
This article was published as:

    Williams, S.B., Pizarro, O., Webster, J.M., Beaman, R.J., Mahon, I., Johnson-Roberson, M., Bridge, T., 2010. Autonomous Underwater Vehicle-assisted surveying of drowned reefs on the shelf edge of the Great Barrier Reef, Australia. Journal of Field Robotics 27(5), 675-697. doi:10.1002/rob.20356

    Copyright © 2010 Wiley Periodicals, Inc.

A link to the online edition of the Journal of Field Robotics at Wiley Online Library can be found at: http://onlinelibrary.wiley.com.elibrary.jcu.edu.au/journal/10.1002/%28ISSN%291556-4967


Autonomous Underwater Vehicle–Assisted Surveying of Drowned Reefs on the Shelf Edge of the Great Barrier Reef, Australia


Stefan B. Williams and Oscar Pizarro
Australian Centre for Field Robotics, University of Sydney, Sydney, NSW 2006, Australia
e-mail: [email protected]

J. M. Webster
School of Geoscience, University of Sydney, Sydney, NSW 2006, Australia
e-mail: [email protected]

R. J. Beaman
James Cook University, P.O. Box 6811, Cairns, QLD 4870, Australia
e-mail: [email protected]

I. Mahon and Matthew Johnson-Roberson
Australian Centre for Field Robotics, University of Sydney, Sydney, NSW 2006, Australia

Tom C. L. Bridge
School of Earth and Environmental Sciences, James Cook University, Townsville, QLD 4811, Australia
e-mail: [email protected]

    Received 8 May 2009; accepted 7 June 2010

This paper describes the role of the autonomous underwater vehicle (AUV) Sirius on a research cruise to survey drowned reefs along the shelf edge of the Great Barrier Reef in Queensland, Australia. The primary function of the AUV was to provide georeferenced, high-resolution optical validation of seabed interpretations based on acoustic data. We describe the AUV capabilities and its operation in the context of the multiple systems used in the cruise. The data processing pipeline involved in generating simultaneous localization and mapping–based navigation, large-scale, three-dimensional visualizations, and automated classification of survey data is briefly described. We also present preliminary results illustrating the type of data products possible with our system and how they can inform the science driving the cruise. © 2010 Wiley Periodicals, Inc.

    1. INTRODUCTION

In late 2007 a scientific expedition set off to survey drowned reefs along the shelf edge of the Great Barrier Reef (GBR) in Queensland, Australia (Webster, Beaman, Bridge, Davies, Byrne, et al., 2008). The GBR is the largest and best-known coral reef ecosystem in the world, with the Great Barrier Reef Marine Park (GBRMP) covering an area of approximately 344,000 km2. Owing to logistical and technological restrictions, the vast majority of research on the GBR has occurred on shallow-water reefs less than 30 m deep. However, these reefs represent only 10% of the area of the GBRMP, and deepwater reef systems along the outer edge of the shelf therefore represent an extensive but largely unexplored habitat.

A multimedia file may be found in the online version of this article.

Drowned reefs on the edge of continental shelves or drop-off zones of oceanic islands have been recognized from many different areas of the world. Investigations off Barbados (Fairbanks, 1989), Hawaii (Webster, Clague, Riker-Coleman, Gallup, Braga, et al., 2004), Papua New Guinea (Webster, Wallace, Silver, Potts, Braga, et al., 2004), and more recently Tahiti (Camoin, Yasufmui, & McInroy, 2005) have confirmed the significance of these reefs as unique archives of abrupt global sea-level rise and climate change. Similar structures occur along the GBR, having been observed in the regions of Hydrographers Passage, Ribbon Reef (Davies & Montaggioni, 1985; Harris & Davies, 1989; Hopley, 2006; Hopley, Graham, & Rasmussen, 1996), Flora Passage, Bowl Reef, and Viper Reef (Beaman, Webster, & Wust, 2008; Hopley et al., 1996). The drowned reefs of the GBR may reflect a complex history of growth and erosion during lower sea levels and are now capped by reef material from the last deglaciation (Beaman et al., 2008). These reefs likely record a unique archive of abrupt climate changes and may provide insights into how the GBR responded to periods of environmental stress. As such, they have direct relevance for environmental managers tasked with predicting how the GBR might respond to future climate change scenarios.

Although these drowned reefs have the potential to provide critical information on the course of sea-level and climatic history of eastern Australia, they have also been shown to support important biological communities referred to as mesophotic coral ecosystems (MCEs). Studies from locations such as Hawaii (Kahng & Kelley, 2007) and American Samoa (Bare, Grimshaw, Rooney, Sabater, Fenner, et al., 2010) have revealed that MCEs contain unique ecological communities. Because they lie beyond the depths accessible to traditional assessment techniques such as SCUBA diving, MCEs remain poorly studied, particularly compared to shallow-water coral reef ecosystems. Recent technological advances such as the development of autonomous underwater vehicles (AUVs) have allowed scientists to explore MCEs in more detail. AUVs provide the ability to collect large quantities of high-resolution, georeferenced data and therefore represent an important tool for understanding the ecology of MCEs. In this study, data collected by an AUV are used to investigate MCEs at four sites within the GBRMP from 50- to 150-m depths. The key objectives for the cruise reported on here were to do the following:

• improve our understanding of the relationship between the structure, composition, and spatial distribution of drowned and modern reefs

• investigate any variations within the succession of drowned shelf edge reefs

• characterize the biological communities associated with these areas

To carry out these objectives, the study used high-resolution multibeam swath bathymetry, subbottom profiling, AUV-based stereo imaging, and rock dredge sampling, thereby providing an unparalleled view of the spatial distribution and morphologic details of the drowned reefs. The primary role of the AUV Sirius was to collect georeferenced optical imagery to provide validation of interpretations made from sonar multibeam surveys as well as subbottom profiling. The AUV provided optical imagery to assist in providing crucial baseline data about the modern substrates, habitats, and biological communities that characterize these poorly studied shelf environments. This paper describes the AUV system and its operation in the context of the cruise objectives. The data processing pipeline involved in generating simultaneous localization and mapping (SLAM)–based navigation and large-scale, three-dimensional (3D) visualizations of the seafloor is outlined. We also examine the performance of image-based classification of habitats based on labels derived from annotated images provided by an ecologist. We present results illustrating the type of data products that can be produced with such a system and examine how it can inform the science driving the cruise.

The remainder of this paper is organized as follows. Section 2 describes the various methods used to characterize the submerged reefs, including a description of the AUV, and Section 3 provides an overview of the algorithms used for constructing high-resolution representations of the seafloor and associated visualization and classification tools. Section 4 shows results from the AUV and how they relate to data collected by shipborne instruments as well as to the overall cruise objectives. Finally, Section 5 provides conclusions and discusses ongoing work.

    2. STUDY METHODS

The science party approached the problem of surveying drowned shelf reefs using multiple sampling/sensing methods suited to the cruise objectives. Broad-scale mapping was used to confirm whether the reefs are consistent geomorphic features and to define patterns of past reef growth. Sites for targeted dredging, subbottom profiling, and AUV imaging were then selected for detailed study of particular features. All activities were undertaken during a 21-day cruise aboard the RV Southern Surveyor, Australia's Marine National Facility (CSIRO MNF RV Southern Surveyor, 2008).

    2.1. Ship-Based Seafloor Mapping

The primary sampling method used on this cruise involved ship-based seafloor mapping of four study sites along the Queensland margin [Figure 1(a)], where the approximate locations of submerged reefs were known. Detailed multibeam bathymetric and backscatter (seafloor reflectivity) surveys using the ship's Simrad EM300 were used to help determine their spatial distribution, depth, and morphology. These data established whether the submerged reefs were regionally significant features with consistent depths, as well as their relationship with shelf width, slope angle, and finer scale bottom features. Subbottom profiling using the shipboard Topas PS-18 subbottom profiling sonar and a sparker towed seismic array provided information as to whether shelf edge reefs were built up or the surroundings were eroded away (erosional/constructional features) and estimates of the thickness and character of sediments between the succession of drowned reefs (Webster, Davies, Beaman, Williams, & Byrne, 2008).

    2.2. AUV Surveys

A second sampling method used on the cruise involved the collection of targeted, high-resolution seafloor imagery using an AUV to validate the interpretations of the ship-based sonar data. Although a towed camera sled is traditionally used for optical characterization (Barker, Helmond, Bax, Williams, & Davenport, 1999), an AUV-based approach offers improved image quality (through improved altitude control), precise positioning, and the ability to operate over very rough bottoms.


Figure 1. Documenting drowned reefs on the GBR: (a) the ship track taken during the research cruise showing the four survey locations selected (Webster, Davies, et al., 2008) and (b) preliminary multibeam profiles showing the drowned reefs (Beaman et al., 2008).

Typically the AUV will operate at lower speeds than a sled so that coverage is reduced, but in the case of optical ground truthing this tends to be compensated for by the ability to target specific bottom features.

AUVs are becoming significant contributors to modern oceanographic research, increasingly playing a role as a complement to traditional survey methods. A nested survey design strategy is observable in most practical uses, in which broadscale sensing helps guide the deployment of the high-resolution imaging AUV, which in turn informs further sampling. Large, fast-survey AUVs can provide high-resolution acoustic multibeam and subbottom data by operating a few tens of meters off the bottom, even in deep water (Grasmueck, Eberli, Viggiano, Correa, Rathwell, et al., 2006; Henthorn, Caress, Thomas, McEwen, Kirkwood, et al., 2006; Marthiniussen, Vestgard, Klepaker, & Storkersen, 2004; McEwen, Caress, Thomas, Henthorn, & Kirkwood, 2006). High-resolution optical imaging requires the ability to operate very close to potentially rugged terrain. For example, the Autonomous Benthic Explorer (ABE) has helped increase our understanding of spreading ridges, hydrothermal vents, and plume dynamics (Yoerger, Jakuba, Bradley, & Bingham, 2007) using both acoustics and vision. The Seabed AUV is primarily an optical imaging AUV used in a diverse range of oceanographic cruises, including coral reef characterization (Singh, Armstrong, Gilbes, Eustice, Roman, et al., 2004). Recently, the related AUVs Puma and Jaguar searched for hydrothermal vents under the Arctic ice (Kunz, Murphy, Camilli, Singh, Eustice, et al., 2008).

The University of Sydney's Australian Centre for Field Robotics (ACFR) operates an oceangoing AUV called Sirius capable of undertaking high-resolution, georeferenced survey work (Williams, Pizarro, Mahon, & Johnson-Roberson, 2008). Sirius is part of the Integrated Marine Observing System (IMOS) AUV Facility, with funding available on a competitive basis to support its deployment as part of marine studies in Australia. This platform is a modified version of a midsize robotic vehicle called SeaBED built at the Woods Hole Oceanographic Institution (Singh, Can, Eustice, Lerner, McPhee, et al., 2004). This class of AUV has been designed specifically for relatively low-speed, high-resolution imaging and is passively stable in pitch and roll. The submersible is equipped with a full suite of oceanographic sensors (see Table I and Figure 2).

The AUV was programmed to maintain an altitude of 2 m above the seabed while traveling at 0.5 m/s (approximately 1 kn) during surveys. Missions lasted from 2 to 4 h, with the AUV surveying transects across drowned reef and interreefal areas to collect data that would allow for the assessment of the substrate types and character of the modern epibenthic assemblages associated with shelf edge reefs. In addition to 2-Hz stereo imagery and multibeam data, the AUV's onboard conductivity/temperature (CT) and Ecopuck fluorometers sampled the water column, establishing the present-day oceanographic conditions on the shelf edge.
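
To give a rough sense of the image coverage these settings imply, the short sketch below works through the along-track spacing and overlap between consecutive stereo pairs. The camera field of view used here is an assumed, illustrative value rather than a figure quoted in this section.

    import math

    # Survey settings reported above.
    altitude_m = 2.0      # commanded altitude above the seabed
    speed_mps = 0.5       # survey speed (approximately 1 kn)
    frame_rate_hz = 2.0   # stereo pairs captured per second

    # Assumed along-track field of view of the camera (illustrative only).
    fov_along_track_deg = 42.0

    # Seafloor footprint of a single image in the along-track direction.
    footprint_m = 2.0 * altitude_m * math.tan(math.radians(fov_along_track_deg / 2.0))
    spacing_m = speed_mps / frame_rate_hz    # distance travelled between frames
    overlap = 1.0 - spacing_m / footprint_m  # fraction shared by consecutive images

    print(f"footprint ~{footprint_m:.2f} m, spacing {spacing_m:.2f} m, "
          f"overlap ~{100.0 * overlap:.0f}%")

Under these assumptions, consecutive images share well over half of their footprint, which is the redundancy exploited by the loop-closure and reconstruction techniques described in Section 3.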

    2.3. Dredging and Grab Sampling

The final sampling method employed during this cruise involved the collection of dredged rock samples from the tops of the shelf edge reefs and grab sampling of sediment at selected sites. The detailed bathymetric and optical surveys provide well-characterized and georeferenced target sites in each study area to obtain rock and sediment samples using rock dredges.


Figure 2. (a) The AUV Sirius being retrieved after a mission aboard the RV Southern Surveyor and (b) internal component layout.

Table I. Summary of the Sirius AUV specifications.

Vehicle
    Depth rating        800 m
    Size                2.0 m (L) × 1.5 m (H) × 1.5 m (W)
    Mass                200 kg
    Maximum speed       1.0 m/s
    Batteries           1.5-kWh Li-ion pack
    Propulsion          Three 150-W brushless DC thrusters

Navigation
    Attitude + heading  TCM2 compass/tilt sensor integrated into DVL
    Depth               Digiquartz pressure sensor
    Velocity            RDI 1,200-kHz Navigator DVL
    Altitude            RDI Navigator
    USBL                TrackLink 1500HA
    GPS receiver        Ashtech A12

Optical imaging
    Camera              Prosilica 12-bit, 1,360 × 1,024 charge-coupled device (CCD) stereo pair
    Lighting            Two 4-J strobes
    Separation          0.75 m between camera and lights

Acoustic
    Multibeam sonar     Imagenex DeltaT, 260 kHz
    Obstacle avoidance  Imagenex 852, 675 kHz

Tracking and communications
    Radio               Freewave RF modem/Ethernet
    Acoustic modem      LinkQuest 1500HA integrated modem

Other sensors
    CT                  Seabird 37SBI
    Chlorophyll-a and turbidity  Wetlabs FLNTU Ecopuck

These dredges were towed parallel to depth contours and along the features in order to collect samples of similar age and composition from the last phase of reef growth. This method has proved to be effective in sampling similar deposits in the southern GBR (Davies, Braga, Lund, & Webster, 2004), Hawaii (Ludwig, Szabo, Moore, & Simmons, 1991), and Tahiti (Camoin, Cabioch, Eisenhauer, Braga, Hamelin, et al., 2006). Postcruise sample analyses involve sedimentary facies, radiometric dating, and palaeoclimate investigations to establish the age and composition of the reefs, the timing and rate of drowning, and the ecosystem response to rapid sea-level rise, environmental stress, and changes in sea-surface temperatures and ocean chemistry. The data will be integrated with the geophysical data sets to better understand reef growth and demise.

It is interesting to consider the relative strengths of the AUV and dredging operations in support of the study objectives. Unlike dredging, the AUV georeferenced imagery maintains the spatial relationships of observed seafloor features. Furthermore, sampling using dredging may introduce bias toward things that actually break off when a tow sled is dragged along the seafloor. The AUV imagery, on the other hand, can be analyzed for substrate and epibenthic cover at scales from subcentimeters to kilometers. However, recovering samples from these drowned reefs is instrumental in conclusively determining the age and composition of these structures and identifying modern benthic species inhabiting these areas (and potentially observed in the imagery). The AUV and dredging operations therefore provide complementary data to support the study objectives. In Section 4 we provide an example of a dive in which the AUV imagery was used to identify dense aggregations of brittlestars, which were later sampled using a grab sampler to facilitate species identification.


    3. AUV NAVIGATION AND HABITAT MAPPING

This section presents a brief overview of techniques being used to recover detailed seafloor maps based on the stereo imagery and navigation data available to the vehicle. It also describes our approach to automated classification of the imagery to help manage the volume of data being collected.

    3.1. Simultaneous Localization and Mapping

Georeferencing the data collected by the AUV requires a suitable technique to estimate the vehicle's pose throughout the dive. Navigation underwater is challenging because electromagnetic signals in water attenuate strongly with distance, precluding the use of absolute position observations such as those provided by the global positioning system (GPS). Acoustic positioning-based systems (Yoerger et al., 2007) can provide absolute positioning but typically at lower precision than that provided by the environmental instruments onboard the AUV (i.e., cameras and sonars). Recent work has demonstrated methods that use the sensor data collected by the vehicle to aid in the navigation process and ensure that poses are consistent with observations of the environment. SLAM is the process of concurrently building a map of the environment and using this map to obtain estimates of the location of the vehicle. The SLAM algorithm has seen considerable interest from the mobile robotics community as a tool to enable fully autonomous navigation (Dissanayake, Newman, Clark, Durrant-Whyte, & Csobra, 2001; Durrant-Whyte & Bailey, 2006).

Early work in the deployment of the SLAM algorithm in reef environments has been reported following trials with the ACFR's unmanned underwater vehicle Oberon and the AUV Sirius operating on the GBR in Australia (Williams & Mahon, 2004). This work laid the foundation for 3D reconstructions of these highly unstructured environments. Related work at the Woods Hole Oceanographic Institution has also examined the application of SLAM (Eustice, Singh, Leonard, & Walter, 2006; Roman & Singh, 2007) and structure from motion (SFM) (Pizarro, Eustice, & Singh, 2003) methods to data collected by remotely operated vehicles (ROVs) and AUVs.

Our current work has concentrated on efficient, stereo-based SLAM and dense scene reconstruction suitable for creating detailed maps of survey sites. Methods for stereo-vision motion estimation and their application to SLAM in underwater environments have been proposed (Mahon, 2008; Mahon, Williams, Pizarro, & Johnson-Roberson, 2008). These techniques are based on the visual augmented navigation (VAN) technique (Eustice et al., 2006). In the VAN framework, the current vehicle state is estimated along with a selection of past vehicle poses, leading to a

    state estimate vector of the form

\hat{\mathbf{x}}^{+}(t_k) =
\begin{bmatrix}
\hat{\mathbf{x}}^{+}_{p_1}(t_k) \\
\vdots \\
\hat{\mathbf{x}}^{+}_{p_n}(t_k) \\
\hat{\mathbf{x}}^{+}_{v}(t_k)
\end{bmatrix}
=
\begin{bmatrix}
\hat{\mathbf{x}}^{+}_{t}(t_k) \\
\hat{\mathbf{x}}^{+}_{v}(t_k)
\end{bmatrix}, \qquad (1)

where $\hat{\mathbf{x}}^{+}_{v}(t_k)$ is the posterior estimate¹ of the current vehicle states $\mathbf{x}_{v}(t_k)$ at time $t_k$ and $\hat{\mathbf{x}}^{+}_{t}(t_k) = [\hat{\mathbf{x}}^{+T}_{p_1}(t_k), \ldots, \hat{\mathbf{x}}^{+T}_{p_n}(t_k)]^{T}$ is a vector of trajectory states consisting of n past vehicle pose estimates. In this implementation, the vehicle state is modeled using the full six-degree-of-freedom pose and associated velocities of the vehicle relative to a local navigation frame defined at the start of the mission such that

\mathbf{x}_{v}(t_k) =
\begin{bmatrix}
{}^{n}\mathbf{p}_{v}(t_k) \\
{}^{n}\boldsymbol{\psi}_{v}(t_k) \\
{}^{v}\mathbf{v}_{v}(t_k) \\
\boldsymbol{\omega}_{v}(t_k)
\end{bmatrix}, \qquad (2)

with ${}^{n}\mathbf{p}_{v}(t_k)$ representing the position of the vehicle in the navigation frame, ${}^{n}\boldsymbol{\psi}_{v}(t_k)$ representing the Euler angles of the vehicle in the navigation frame, ${}^{v}\mathbf{v}_{v}(t_k)$ representing the velocity of the vehicle in the local vehicle frame, and $\boldsymbol{\omega}_{v}(t_k)$ representing the local body rotation rate.

    The covariance matrix has the form

\mathbf{P}^{+}(t_k) =
\begin{bmatrix}
\mathbf{P}^{+}_{tt}(t_k) & \mathbf{P}^{+}_{tv}(t_k) \\
\mathbf{P}^{+T}_{tv}(t_k) & \mathbf{P}^{+}_{vv}(t_k)
\end{bmatrix}. \qquad (3)

In the information form, the filter maintains the information matrix $\mathbf{Y}^{+}(t_k)$, which is the inverse of the covariance matrix,

\mathbf{Y}^{+}(t_k) = \left[\mathbf{P}^{+}(t_k)\right]^{-1}, \qquad (4)

and the information vector $\hat{\mathbf{y}}^{+}(t_k)$, which is related to the state estimate by

\hat{\mathbf{y}}^{+}(t_k) = \mathbf{Y}^{+}(t_k)\,\hat{\mathbf{x}}^{+}(t_k). \qquad (5)

The VAN information vector has the form

\hat{\mathbf{y}}^{+}(t_k) =
\begin{bmatrix}
\hat{\mathbf{y}}^{+}_{t}(t_k) \\
\hat{\mathbf{y}}^{+}_{v}(t_k)
\end{bmatrix}, \qquad (6)

    and the information matrix is

\mathbf{Y}^{+}(t_k) =
\begin{bmatrix}
\mathbf{Y}^{+}_{tt}(t_k) & \mathbf{Y}^{+}_{tv}(t_k) \\
\mathbf{Y}^{+T}_{tv}(t_k) & \mathbf{Y}^{+}_{vv}(t_k)
\end{bmatrix}. \qquad (7)

The VAN estimation process uses the standard extended information filter three-step prediction, observation, and update cycle. The filter process and observation model parameters used in this case are shown in Table II.

¹We use the notation of Gelb (1996) and denote the estimate using the hat (ˆ), the posterior using the superscript +, and the prior using the superscript −.


Table II. AUV Sirius navigation filter parameters.

Process noise                                        Std Dev
    X, Y velocity                                    1.0 m/s²
    Z velocity                                       0.75 m/s²
    Roll, pitch rate                                 0.2 deg/s²
    Heading rate                                     0.5 deg/s²

Observation      Sensor                              Std Dev
    Attitude     DVL (TCM2 roll/pitch)               0.25 deg
    Heading      DVL (TCM2 compass)                  1.0 deg
    Depth        Digiquartz pressure sensor          5 mm
    Velocity     RDI 1,200-kHz Navigator DVL         3 mm/s
    USBL         TrackLink 1500HA                    5-m range, 5-deg bearing
    GPS          Ashtech A12 receiver                5 m
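
When implementing a filter of this kind, the noise parameters of Table II can be gathered in a single configuration structure from which the process and observation noise covariances are built. The sketch below is one possible layout of that configuration; the grouping and field names are ours, not taken from the paper.

    # Navigation filter noise parameters from Table II (standard deviations).
    # Field names and grouping are illustrative only.
    FILTER_PARAMS = {
        "process_noise_std": {
            "xy_velocity": 1.0,      # m/s^2
            "z_velocity": 0.75,      # m/s^2
            "roll_pitch_rate": 0.2,  # deg/s^2
            "heading_rate": 0.5,     # deg/s^2
        },
        "observation_noise_std": {
            "attitude_deg": 0.25,    # DVL (TCM2 roll/pitch)
            "heading_deg": 1.0,      # DVL (TCM2 compass)
            "depth_m": 0.005,        # Digiquartz pressure sensor (5 mm)
            "velocity_mps": 0.003,   # RDI 1,200-kHz Navigator DVL (3 mm/s)
            "usbl_range_m": 5.0,     # TrackLink 1500HA
            "usbl_bearing_deg": 5.0,
            "gps_m": 5.0,            # Ashtech A12
        },
    }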

    3.1.1. Prediction

The vehicle states are assumed to evolve according to a process model of the form

\mathbf{x}_{v}(t_k) = \mathbf{f}_{v}\!\left(\mathbf{x}_{v}(t_{k-1})\right) + \mathbf{w}(t_k), \qquad (8)

where the vehicle state estimate is modeled using a constant velocity model $\mathbf{f}_{v}$ and $\mathbf{w}(t_k)$ is an error vector from a zero-mean Gaussian distribution with covariance $\mathbf{Q}(t_k)$. Direct observations of the vehicle velocity over ground in the local vehicle reference frame are provided by the Doppler velocity log (DVL) as an observation to the filter.

When propagating the vehicle states to a new timestep with a prediction operation, the previous vehicle pose is kept in the state vector if it records the location where a stereo pair of images was acquired. Maintaining such poses allows loop-closure observations produced from vision data to be applied to the filter. When propagating the filter forward from the time of a vehicle depth, attitude, velocity, or ultra short baseline (USBL) observation, the previous pose is marginalized from the state vector (Mahon, 2008; Mahon et al., 2008).
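
A minimal sketch of this bookkeeping is given below: before each prediction the previous pose is either retained in the trajectory (if a stereo pair was captured there) or dropped, and the current vehicle state is then propagated with a constant-velocity model. The covariance-form, position-and-velocity-only state used here is a simplified stand-in for the full six-degree-of-freedom, information-form implementation described in the text.

    import numpy as np

    def predict(trajectory, x_v, P_vv, Q, dt, had_stereo_pair):
        """One prediction step of a pose-augmented filter (simplified sketch).

        trajectory      -- list of retained past poses [x, y, z]
        x_v             -- current vehicle state [x, y, z, vx, vy, vz]
        P_vv            -- covariance of the current vehicle state
        Q               -- process noise covariance
        had_stereo_pair -- True if a stereo pair was captured at the current pose
        """
        # Keep the previous pose only if it may later take part in a visual
        # loop closure; otherwise it is marginalized out (here simply dropped,
        # ignoring the cross-correlations the full filter would fold in).
        if had_stereo_pair:
            trajectory.append(x_v[:3].copy())

        # Constant-velocity process model f_v.
        F = np.eye(6)
        F[0:3, 3:6] = dt * np.eye(3)
        x_v = F @ x_v
        P_vv = F @ P_vv @ F.T + Q
        return trajectory, x_v, P_vv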

    3.1.2. Observation

Observations are assumed to be made according to a model of the form

\mathbf{z}(t_k) = \mathbf{h}\!\left(\mathbf{x}(t_k)\right) + \mathbf{v}(t_k), \qquad (9)

in which $\mathbf{z}(t_k)$ is an observation vector and $\mathbf{v}(t_k)$ is a vector of observation errors with covariance $\mathbf{R}(t_k)$. The difference between the actual and predicted observations is the innovation

\boldsymbol{\nu}(t_k) = \mathbf{z}(t_k) - \mathbf{h}\!\left(\hat{\mathbf{x}}^{-}(t_k)\right). \qquad (10)

    Velocity

Observations of the vehicle velocity in the local body frame are provided by the vehicle's DVL. This device provides measurements of the speed over ground of the vehicle at distances up to 45 m from the seafloor:

\mathbf{z}_{dvl}(t_k) = \begin{bmatrix} {}^{v}\dot{x}_{v} & {}^{v}\dot{y}_{v} & {}^{v}\dot{z}_{v} \end{bmatrix}^{T}. \qquad (11)

    Depth

Measurements of depth are inferred from measurements provided by the vehicle's pressure sensor:

\mathbf{z}_{depth}(t_k) = \begin{bmatrix} {}^{n}z_{v} \end{bmatrix}. \qquad (12)

    Attitude

The vehicle's RDI Navigator DVL includes an integrated compass with roll and pitch sensor. Given the design of the vehicle, the roll and pitch are relatively stable once the vehicle is underway:

\mathbf{z}_{att}(t_k) = \begin{bmatrix} \phi_{v} & \theta_{v} & \psi_{v} \end{bmatrix}^{T}. \qquad (13)

    Visual Loop Closures

Loop-closure observations are created using a six-degree-of-freedom stereo vision relative pose estimation algorithm (Mahon, 2008). The SIFT algorithm (Lowe, 1999) is used to extract and associate visual features, and epipolar geometry is used to reject inconsistent feature observations within each stereo image pair. Triangulation (Hartley & Zisserman, 2000) is performed to calculate initial estimates of the feature positions relative to the stereo rig, and a redescending M-estimator (Maronna, Martin, & Yohai, 2006) is used to calculate a relative pose hypothesis that minimizes a robustified registration error cost function. Any remaining outliers with observations inconsistent with the motion hypothesis are then rejected. Finally, the maximum likelihood relative vehicle pose estimate and covariance are calculated from the remaining inlier features. An example set of stereo image pairs and the visual features used to produce a loop-closure observation are presented in Figure 3.
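
The sketch below illustrates the flavor of such a pipeline on a single monocular pair: SIFT features, ratio-test matching, and a RANSAC essential-matrix estimate standing in for the epipolar consistency check and redescending M-estimator used in the paper. With the stereo rig on Sirius the baseline also resolves the translation scale, which this simplified monocular version leaves undetermined.

    import cv2
    import numpy as np

    def relative_pose(img_a, img_b, K):
        """Estimate the relative pose between two seafloor images (sketch only).

        img_a, img_b -- grayscale images of the same patch of seafloor
        K            -- 3x3 camera intrinsics matrix
        """
        sift = cv2.SIFT_create()
        kp_a, desc_a = sift.detectAndCompute(img_a, None)
        kp_b, desc_b = sift.detectAndCompute(img_b, None)

        # Associate features and keep only clearly unambiguous matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(desc_a, desc_b, k=2)
        good = [m for m, n in matches if m.distance < 0.8 * n.distance]

        pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

        # Robustly estimate the essential matrix, rejecting outlier matches,
        # then recover the relative rotation and (unit-scale) translation.
        E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
        return R, t, int(inliers.sum())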

The observation of previous pose i relative to the current vehicle pose is estimated using the compound transformation that relates the two poses. This estimate is compared against the maximum likelihood relative vehicle pose estimate:

\mathbf{z}_{vis}(t_k) = \begin{bmatrix} {}^{v}x_{i} & {}^{v}y_{i} & {}^{v}z_{i} & {}^{v}\phi_{i} & {}^{v}\theta_{i} & {}^{v}\psi_{i} \end{bmatrix}^{T}. \qquad (14)

    Ultra Short Baseline Positioning

Observations of range and bearing between the support ship and the vehicle are provided by a USBL positioning system. Observations of ship position and attitude available from the ship's navigation instruments are combined with the measured position of the sonar head (in this case mounted in a moon pool near the center of the ship) to provide an estimate of the transceiver location.


Figure 3. Loop-closure examples showing the successfully matched features on a variety of bottom types: (a) small coral outcrop and (b) sandy substrate. Each panel shows the left and right stereo pair from one pose on the top and the corresponding left and right pair for the second pose on the bottom. The change in brightness between the pairs is a result of differences in altitude at which the vehicle captured the images.

The sensor provides a measurement of the range R and bearing β between the transceiver head and the vehicle in the transceiver frame of reference:

\mathbf{z}_{usbl}(t_k) = \begin{bmatrix} R & \beta \end{bmatrix}^{T}. \qquad (15)

    3.1.3. Update

The innovation is used to update the information vector and matrix:

\hat{\mathbf{y}}^{+}(t_k) = \hat{\mathbf{y}}^{-}(t_k) + \mathbf{i}(t_k), \qquad (16)

\mathbf{Y}^{+}(t_k) = \mathbf{Y}^{-}(t_k) + \mathbf{I}(t_k), \qquad (17)

    in which

\mathbf{i}(t_k) = \nabla_{x}\mathbf{h}^{T}(t_k)\,\mathbf{R}^{-1}(t_k)\left[\boldsymbol{\nu}(t_k) + \nabla_{x}\mathbf{h}(t_k)\,\hat{\mathbf{x}}^{-}(t_k)\right], \qquad (18)

\mathbf{I}(t_k) = \nabla_{x}\mathbf{h}^{T}(t_k)\,\mathbf{R}^{-1}(t_k)\,\nabla_{x}\mathbf{h}(t_k), \qquad (19)

where $\nabla_{x}\mathbf{h}(t_k)$ is the Jacobian of the observation function with respect to the vehicle states.
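
A dense-matrix sketch of this update, following Eqs. (16)–(19), is given below. The actual implementation works with the sparse structure of the trajectory states; this version simply shows the algebra.

    import numpy as np

    def eif_update(y_info, Y_info, z, h, H, R):
        """Extended information filter update (simplified dense sketch).

        y_info, Y_info -- prior information vector and matrix
        z              -- observation vector
        h              -- observation function, h(x)
        H              -- Jacobian of h with respect to the state
        R              -- observation noise covariance
        """
        # Recover the prior state estimate and form the innovation, Eq. (10).
        x_prior = np.linalg.solve(Y_info, y_info)
        nu = z - h(x_prior)

        R_inv = np.linalg.inv(R)
        i_vec = H.T @ R_inv @ (nu + H @ x_prior)   # Eq. (18)
        I_mat = H.T @ R_inv @ H                    # Eq. (19)
        return y_info + i_vec, Y_info + I_mat      # Eqs. (16) and (17)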

The SLAM filter uses the information form due to the efficiency of observation updates. Methods to efficiently recover pose estimates and covariances from the information vector and matrix are detailed in Mahon (2008) and Mahon et al. (2008). These methods allow the data collected by the vehicle to be processed while at sea, allowing the scientists to examine the georeferenced imagery and associated seafloor models between dives. Examples of SLAM being applied to data collected during this cruise are presented in Section 4.

    3.2. Seafloor 3D Reconstructions

Although SLAM recovers consistent estimates of the vehicle trajectory, the estimated vehicle poses themselves do not provide a representation of the environment suitable for human interpretation. A typical dive will yield several thousand georeferenced overlapping stereo pairs. Although useful in themselves, single images make it difficult to appreciate spatial features and patterns at larger scales. We have developed a suite of tools to combine the SLAM trajectory estimates with the stereo image pairs to generate 3D meshes and place them in a common reference frame (Johnson-Roberson, Pizarro, Williams, & Mahon, 2010). The resulting composite mesh allows a user to quickly and easily interact with the data while choosing the scale and viewpoint suitable for the investigation. Spatial relationships within the data are preserved, and users can move from a high-level view of the environment down to very detailed investigation of individual images and features of interest within them. This is a useful data exploration tool for the end user to develop an intuition of the scales and distributions of spatial patterns within the seafloor habitats.

The process used to generate 3D reconstructions can be broken down into the following main steps, as shown in Figure 4 (a minimal sketch of the stereo meshing and aggregation steps follows the list):

1. Data Preprocessing. The primary purpose of the preprocessing step is to partially compensate for lighting and wavelength-dependent color absorption. This allows improved feature extraction and matching during the next stage.

2. Stereo Depth Estimation. Extracts two-dimensional (2D) feature points from each image pair, robustly proposes correspondences, and determines their 3D position by triangulation. The local 3D point clouds are converted into individual Delaunay triangulated meshes.

3. Mesh Aggregation. Places the individual stereo meshes into a common reference frame using the SLAM-based poses described in the preceding section.


Figure 4. Processing modules and data flow for the reconstruction and visualization pipeline (Johnson-Roberson et al., 2010).

The system fuses the meshes into a single mesh using volumetric range image processing (VRIP) (Curless & Levoy, 1996). The total bounding volume is partitioned so that standard volumetric mesh integration techniques operate over multiple smaller problems while minimizing discontinuities between integrated meshes. This stage also produces simplified versions of the mesh to allow for fast visualization at broad scales.

4. Texturing. The polygons of the complete mesh are assigned textures based on the overlapping imagery that projects onto them. Lighting and misregistration artifacts are reduced by separating images into spatial frequency bands that are mixed over greater extents for lower frequencies (Burt & Adelson, 1983).

5. Visualization. Some AUV missions have upward of 10,000 pairs of images, which expands to hundreds of millions of vertices when processed to generate the stereo meshes. It is simply intractable to load all of these data directly into memory. To view the data we use a discrete paged level of detail scheme (Clark, 1976), in which several discrete simplifications of geometry and texture data are generated and stored on disk. Level of detail (LOD) in 3D meshes allows viewing extremely large models with limited computational bandwidth by reducing the complexity of a 3D scene in proportion to the viewing distance or relative size in screen space.
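
As a concrete (if much simplified) illustration of steps 2 and 3, the sketch below turns one stereo pair's triangulated points into a small Delaunay mesh and places it in the common frame using the SLAM pose estimate. The full pipeline additionally fuses overlapping meshes with VRIP and builds the LOD hierarchy, which is not reproduced here.

    import numpy as np
    from scipy.spatial import Delaunay

    def stereo_points_to_mesh(points_cam, R_wc, t_wc):
        """Mesh one stereo pair's points and place them in the world frame.

        points_cam -- (N, 3) triangulated 3D points in the camera frame
        R_wc, t_wc -- camera-to-world rotation (3x3) and translation (3,),
                      taken from the SLAM pose estimate for this image
        """
        # Triangulate in the (x, y) coordinates to obtain faces; a seafloor
        # patch imaged from above is close enough to a height field for this
        # to be reasonable.
        faces = Delaunay(points_cam[:, :2]).simplices

        # Transform the vertices into the common reference frame.
        verts_world = points_cam @ R_wc.T + t_wc
        return verts_world, faces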

Section 4 and the multimedia extensions present examples of detailed 3D surface models derived from data collected during a number of dives completed during this cruise.

    3.3. Habitat Classification

Although the visualization of detailed 3D reconstructions improves our ability to understand the spatial layout of seafloor features, further analysis and interpretation of the data gathered during a dive is required to address tasks such as habitat characterization and monitoring. This analysis stage is typically performed by human experts, which limits the amount and speed of data processing (Holmes, Niel, Radford, Kendrick, & Grove, 2008). It is unlikely that machines will match humans at fine-scale classification any time soon, but machines can now perform preliminary, coarse classification to provide timely and relevant feedback to assist human interpretation and focus attention on features of interest. We are developing an image-based habitat classification and clustering system to facilitate the analysis of the large volumes of image data collected by the AUV (Pizarro, Colquhoun, Rigby, Johnson-Roberson, & Williams, 2008; Pizarro, Williams, & Colquhoun, 2009).

We have adapted a state-of-the-art object recognition system (Nister & Stewenius, 2006) to recognize and classify marine habitat imagery based on labeled examples (Pizarro et al., 2008). These approaches are inspired by the "bag of words" technique for document recognition and have been demonstrated on challenging tasks (Philbin, Chum, Isard, Sivic, & Zisserman, 2007; Sivic & Zisserman, 2003). The techniques represent an image (document) as a collection of visual features (words), which are further grouped using probabilistic topic models. A topic model θ in a collection of images (documents) is a probability distribution $\{p(w|\theta)\}_{w \in V}$ of visual features (words) w over the vocabulary set V. It represents a semantically coherent topic, with high-probability words collectively suggesting a semantic theme. As such, topics form an intermediate level of abstraction and can be considered a form of dimensionality reduction that aids human interpretation. Instead of


modeling an image as a bag of words, the image is represented as a mixture of topics.

A set of training documents is used to build a vocabulary of topics so that query documents can be described by the frequency of occurrence of all the topics in the vocabulary set. In the case of images, these approaches typically use local features that select distinctive regions and describe them in a manner that is robust or invariant to changes in scale and orientation. The approach finds the images in the training set that are closest to the query image. If we can assign a class (habitat) label to each training image, it is possible to assign a class to a query image based on the class of the closest training image. Section 4 examines the effectiveness of using these visual topics to classify images based on labeling provided by a human expert. The performance of a number of common classifiers is compared on this habitat classification task.
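
A minimal stand-in for such a system is sketched below: a flat k-means visual vocabulary and a nearest-neighbor classifier over normalized word histograms, in place of the vocabulary tree and topic model used in the papers cited above. Local descriptor extraction (e.g., SIFT) is assumed to have been done elsewhere.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def train_bow_classifier(train_descriptors, train_labels, vocab_size=200):
        """Bag-of-visual-words habitat classifier (simplified sketch).

        train_descriptors -- list of (N_i, D) arrays of local feature
                             descriptors, one array per training image
        train_labels      -- habitat label for each training image
        """
        # Build the visual vocabulary from all training descriptors.
        vocab = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
        vocab.fit(np.vstack(train_descriptors))

        def histogram(desc):
            words = vocab.predict(desc)
            h = np.bincount(words, minlength=vocab_size).astype(float)
            return h / h.sum()

        # Represent each image by its word histogram and label a query image
        # with the class of its closest training images.
        X = np.array([histogram(d) for d in train_descriptors])
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)
        return lambda query_desc: knn.predict([histogram(query_desc)])[0]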

    4. RESULTS

This section presents results of these activities, illustrating how the navigation information, 3D reconstructions, and classified imagery complement data collected by other instruments during the cruise. Four sites along the extent of the GBR, as shown in Figure 1(a), were selected for detailed bathymetric mapping, AUV seafloor imaging, dredging, and grab sampling to determine substrate composition during the cruise (Webster, Davies, et al., 2008). Figures 5 and 6 show bathymetric maps and AUV dive profiles of three of the four study sites. These maps were used to identify potential AUV dive sites and the location of subbottom profiling, grab sampling, and dredging operations. Higher resolution versions of these maps are being used to study the relict reef sites in more detail.

Over the course of the 3-week voyage, the AUV was deployed at nine locations, undertaking both overlapping grid surveys of particular features and cross-shelf transects to document the variability in benthic habitats as a function of depth. The AUV imagery shows a variety of benthic communities and substrates that include red algae–encrusted fossil rock, thriving hard and soft coral, gorgonians (sea whip or fan), sponge communities, and Halimeda (green algae). Table III provides a summary of the dives undertaken during the course of the cruise. As can be seen, the vehicle covered in excess of 26 linear kilometers and collected more than 115,000 images.

Figure 5. Bathymetric maps of the Noggin Pass and Viper Reef study sites: (a) Noggin Pass and (b) Viper Reef, relatively low-resolution bathymetric maps generated during the cruise; (c) Noggin Pass detail and (d) Viper Reef detail, showing the dives overlaid on the local bathymetry. The circled numbers correspond to dive IDs shown in Table III.


Figure 6. Bathymetric maps of the Hydrographers Passage study site: (a) relatively low-resolution bathymetric map generated during the cruise; (b) and (c) details of the dives overlaid on the local bathymetry. The circled numbers correspond to dive IDs shown in Table III.

Figures 5(c), 5(d), 6(c), and 6(d) show profiles of the eight missions flown by the vehicle, illustrating the variety of dive profiles undertaken and the flexibility in mission planning afforded by the AUV. Each dive was targeted at a particular benthic feature and was designed with respect to the bathymetric information available. For example, dives 5, 7, and 8 each featured relatively dense overlapping grids, with dives 7 and 8 designed to survey habitats at different depths in a single dive. On the other hand, dives 4, 6, and 9–11 were designed to cover more ground and encompass a greater variety of benthic habitats.

    Table III. Dive summary.

Dive ID  Start time (UTC)  Dur. (h)  Lat.     Long.    Max D. (m)  Dist. (m)  Av. alt. (m)  Stereo pairs
2        28/09/07 01:30    0:06      −15.378  145.766  60.7        174        2.18          808
4        03/10/07 23:09    1:25      −17.094  146.566  97.8        1,519      1.94          7,548
5        04/10/07 01:34    0:55      −17.094  146.566  64.2        1,518      2.08          6,735
6        05/10/07 04:45    0:40      −18.879  148.449  98.5        1,143      2.01          5,295
7        06/10/07 00:11    4:15      −18.880  148.437  147.7       6,198      2.03          25,958
8        07/10/07 02:46    2:40      −18.882  148.442  88.8        4,729      2.00          19,511
9        10/10/07 23:09    2:00      −19.693  150.231  147.0       3,760      2.07          15,916
10       11/10/07 04:37    1:45      −19.701  150.246  93.0        2,676      1.97          12,660
11       12/10/07 04:21    3:00      −19.868  150.460  70.2        4,304      2.04          21,349
Total                      16:46                                   26,021                   115,780


Dive 2, undertaken at the Ribbon Reef site, was unusual in that a fault with the vehicle software prevented the vehicle from completing its intended mission; it is not shown here. Dives 1 and 3 were engineering dives aimed at verifying vehicle operations prior to the scientific missions and are also not shown here.

The flexibility with which the vehicle could be programmed allowed for a tight coupling between other data being collected, such as the ship-borne multibeam, and the design of the AUV dive profiles. This allowed the scientists to plan missions targeting particular features based on bathymetric maps being generated by the ship's sonar. During deployments, the vehicle was programmed to complete a particular dive profile designed according to the desired mission objectives. The vehicle operated completely autonomously of the ship, and its progress was monitored using the USBL mounted on the ship's hull. An acoustic link allowed updated estimates of the vehicle state and diagnostic information to be transmitted to the ship. The USBL observations were also communicated to the vehicle but were not fused into the vehicle estimate during the deployment. The vehicle's estimated state while underway was based on the dead-reckoning information provided by the DVL, attitude, and depth sensors.

The following sections describe three of the dives in more detail, demonstrating the improvement in navigation estimates achievable using our SLAM techniques and illustrating how the AUV imagery complements other data collected during the cruise.

    4.1. Dive 5: Overlapping Grid Survey

Dive 5 at Noggin Pass, shown in Figure 5(c), consisted of a 100 × 100 m overlapping grid survey. This dive was positioned in approximately 60 m of water over an area characterized by suspected ancient shoals. The vehicle covered approximately 1.5 km of linear travel, achieving a maximum depth of 65 m. The bottom was largely composed of sandy substrate, with a number of relict coral outcrops supporting modern coral and Halimeda.

A comparison of the estimated trajectories produced by dead reckoning and two SLAM filters for this deployment is shown in Figure 7. The filters integrate the DVL velocity observations with measures of depth, heading, roll, and pitch. The dead-reckoning filter is not given access to imagery and is therefore not able to correct for drift that accumulates in the vehicle navigation solution. In contrast, loop closures identified in the imagery allow for this drift to be identified and for the estimated vehicle path to be corrected. In the first instance, the filter is provided with all of the USBL observations collected during the mission. This can be considered the best navigation estimate we can generate for these deployments. The second filter uses the USBL observations only during the descent and ascent portions of the dive.

This allows the mission to be georeferenced for comparison with the full solution but relies on visual loop closures to maintain the consistency of the pose estimates during the dive. Although the absolute positions of the SLAM estimates differ, the general shapes of the trajectories are consistent, with a constant offset related to the georeferencing of the resulting dive profile. A third filter that uses dead reckoning only is not consistent with the loop closures identified during the dive. This can be seen in the close-up portions of the figure. This survey included more than 6,500 image pairs, and loop-closure observations were identified at each crossing using both SLAM filters, shown by the magenta and cyan lines joining estimated poses.

Figure 8 shows a comparison of the 2σ standard deviation uncertainty bounds in the estimates for the three filters. As can be seen, the uncertainty in the dead-reckoned position grows during the course of the mission until the USBL observations are fused at the end of the run. The loop closures allow the SLAM filters to limit the growth of uncertainty in the vehicle position. For each of the SLAM filters, both the vehicle covariance during the run and the covariance of the final pose estimates are shown. As can be seen, all of the poses achieve an uncertainty equivalent to the smallest pose uncertainty of the vehicle during the dive. This is a result of the correlations between estimated vehicle poses that are maintained by the filter.

Figure 9(a) illustrates the improvement in consistency of the estimates afforded by the pose-augmented SLAM solution. In this case, the best estimate of the vehicle pose is provided by the filter that integrates all of the USBL observations. Although this is not an entirely independent source of ground truth, the covariance plots suggest that the filter is providing estimates with an uncertainty on the order of a few meters. Taking the difference between the dead-reckoned estimate of vehicle trajectory and this filter solution shows that there are significant differences in the estimated vehicle pose during the course of the dive (on the order of 15 m in some instances). The estimated vehicle poses provided by the SLAM filter, on the other hand, show a difference of less than 5 m throughout the dive relative to the full solution, and this offset is constant for all poses except in the immediate vicinity of the USBL observations. This is largely a result of the number of loop closures available to constrain the vehicle poses in this case and effectively translates into the residual difference in the georeferenced pose of the two solutions. We can also examine the consistency of the projection of the loop-closure feature positions using the estimated vehicle poses associated with the loop closures. The SLAM filters are designed to minimize this error for features that are part of the loop-closure constraints. Figure 9(b) shows histograms of the L2 norm of the 3D registration errors for the case of the full SLAM solution (including USBL), the filter that integrates USBL only at the start and end of the dive, and the dead-reckoning filter.


Figure 7. (a) Comparison of dead-reckoning and SLAM vehicle trajectory estimates for grid survey 5 (axes: East and North, in meters). The full SLAM solution (including all USBL observations recorded during the dive) is shown in red with triangle markers, the SLAM trajectory with USBL used only at the start and end of the dive is shown in black with circle markers, and the dead-reckoning estimates with USBL observations also available only at the start and end of the dive are shown in green with plus sign markers (the markers are shown at every 10th image position). (b) A close-up of a portion of the dive showing the visual loop closures in more detail. The magenta and cyan lines connect vehicle poses that have had a loop closure applied for the full and limited USBL SLAM solutions, respectively. The camera is mounted approximately 0.8 m forward of the navigation frame origin, resulting in the loop-closure pose estimates being offset from the crossover itself. (c) Notice that the relative positions of the transects are consistent between the SLAM solutions but that the crossover in the dead-reckoning solution is significantly closer to the corner. This would result in an error in the resulting seafloor model.


Figure 8. The 2σ standard deviation bounds (north, east, and depth standard deviations versus time) for the full SLAM solution (including all USBL observations), the solution that fuses USBL only at the beginning and end of the trajectory, and the dead-reckoning covariances.

Figure 9. (a) Difference in estimates: the difference between the full SLAM solution (including USBL observations throughout) and the SLAM trajectory that uses USBL only at the start and end of the dive is shown in black, and the difference between the full SLAM solution and the dead-reckoning estimates is shown in green. (b) Loop-closure 3D registration error: histograms of the L2 norm of the 3D position errors for loop-closure features for the case of the full SLAM solution (including USBL), the filter that integrates USBL only at the start and end of the dive, and the dead-reckoning filter.


Figure 10. Innovations and 2σ innovation covariances for the full SLAM solution in red (including all USBL observations), the SLAM solution with USBL only available during the descent and ascent in black, and the dead-reckoning innovations in green. (a) Velocity innovations for the observations of speed over ground in the vehicle frame of reference provided by the DVL. (b) Attitude innovations for roll, pitch, and yaw of the vehicle provided by the TCM2 sensor integrated into the DVL. (c) Depth innovations provided by the Paroscientific pressure sensor and (d) range and bearing innovations for positioning fixes provided by the TrackLink USBL. Notice that the USBL observations are not available to the SLAM filter but the innovations are still consistent at the end of nearly an hour without external observations.

As can be seen, the 3D registration errors for the two SLAM filters show very similar characteristics, with the majority of the estimated loop-closure feature locations being within 5 cm of one another when reprojected into space. The dead-reckoning filter, on the other hand, features errors on the order of 2–4 m for most loop closures, with some exhibiting errors of up to 10 m, suggesting that there are significant errors in the estimated relative poses in this case.

The innovations for the observations can also be examined to verify the consistency of the filter estimates. Figure 10 shows the innovations and associated 2σ innovation covariances for the measurements of velocity, attitude, depth, and range/bearing. The innovations appear white and are bounded by the innovation covariance. There are a number of outliers in the USBL data that are rejected by the filter.


Figure 11. 3D reconstruction of a drowned reef during the GBR survey. The site was in approximately 60 m of water and was selected based on multibeam bathymetric models collected from the surface. (a) An overview of the survey pattern. Darker areas occur when the vehicle maneuvers over a reef bommy, some up to 6 m in height, resulting in a wider but poorly lit swath. (b) A detailed view of a small coral outcrop with other transects apparent in the background. (c) The loop closure shown in Figure 7(c), showing how the coral mound is correctly resolved along the two transects.

Notice that the USBL observations are not available to the SLAM filter but the innovations are still consistent at the end of nearly an hour without external observations.

A 3D overview of the dive profile is shown in Figure 11(a), and Figures 11(b) and 11(c) show details of two reef outcrops, highlighting the seafloor structure encountered by the vehicle during this deployment. Some of these outcrops were up to 6 m in height and appeared to be covered in modern coral, sponge, and algae, illustrating the importance of these structures for modern reef organisms. These communities have not been extensively studied because of their location beyond traditional SCUBA diving depths.

    4.2. Dive 9: Cross-Shelf Transect

Figure 12 shows a detailed view of one of the AUV dive sites at Hydrographers Passage (dives 9 and 10) overlaid on the bathymetry. The objective of the pair of dive tracks visible in this image was to assess the habitat composition on the 50-m feature and to undertake a cross-shelf transect out into deeper water. The site was selected following high-resolution bathymetric surveying to examine ancient foreshore structures. The vehicle was initially deployed over relict reefs in 50 m of water. As can be seen in Figure 12, it then traveled to the northeast over a succession of deep, relict reefs, descending to a depth of 150 m. Subbottom profiling and dredging were also undertaken in the same area. Once the vehicle was recovered, it was possible to examine the bottom composition correlated with the depths. The AUV imagery showed a number of distinct benthic habitats. These included modern reef areas in the photic zone (…) by hard substrate supporting modern deepwater benthic assemblages, and a deepwater area characterized by flat sandy substrate.


Figure 12. (a) AUV dive tracks 9 and 10 overlaid on the detailed bathymetric profile (EM300 multibeam) showing a succession of drowned shorelines and reef structures (Webster, Davies, et al., 2008). In addition to the AUV transects (yellow), sparker subbottom profiling (cyan lines) and dredge and grab sampling were undertaken at this site. (b) An oblique view of the bathymetry illustrating the structure of the terrain surveyed during dives 9 and 10.


We conducted an ecological analysis of a subset of the images to describe the composition of substrate types and taxonomic groups visible in every 20th image for this dive (n = 718). This provided a virtual quadrat (composed of a single image) at approximately 5-m intervals across the seafloor along the entire length of the transect. Data were recorded using a grading system based on estimated relative abundance of five substrate types and 14 major taxonomic groups, as shown in Table IV. The scoring followed the methods of Done (1982), where 1 = … 80% for any substrate type in each image. Sessile benthic taxa occurred in 97% of analyzed images along the entire length of the transect and in all substrate groups.

    Table IV. Substrate and taxon metrics.

Substrate                          Taxon group
1. Rough limestone                 6. Fan gorgonian
2. Sediment-covered limestone      7. Other sponge
3. Rubble                          8. Carteriospongia
4. Gravel                          9. Antipatharia
5. Sand                            10. Dendronephthya
                                   11. Branching scleractinia
                                   12. Other scleractinia
                                   13. Zooxanthellate octocoral
                                   14. Crustose coralline algae
                                   15. Halimeda
                                   16. Cyanobacteria
                                   17. Green algae
                                   18. Red algae
                                   19. Benthic foraminifera

The highest abundance and diversity of taxa occurred on the shallow reef and rubble substrates above a 70-m depth. The middle section of the transect, occurring on the flat, sandy region from 90 to 100 m, is dominated by benthic foraminifera and algae. The topographically complex region from 100 to 140 m is dominated by azooxanthellate filter feeders, particularly octocorals. Differences in the benthic assemblages are particularly evident when the images are examined, and we were interested to investigate whether a classifier based on automatically derived image features would be capable of learning to classify the images using habitat clusters derived based on the expert input.

A K-means clustering algorithm with five groups was used to cluster the images based on the ecological abundance metrics (a minimal sketch of this step follows the list below). Figure 13 shows the dive track and depth profile of the vehicle color coded by cluster number. As can be seen, a number of distinct groupings are present within the data, corresponding to the following groups:

• C1 corresponding to the relatively flat shelf
• C2 occurring in the deep water where the sand was relatively smooth and unpopulated
• C3 present on steeply sloping areas of the seabed and at a rough limestone section of seafloor at the shelf edge
• C4 occurring at the base of slopes
• C5 centered in the shallow reef areas
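The clustering step itself is standard. The following is a minimal sketch using scikit-learn, which is an assumed tool rather than the software reported in the paper; the `scores` array stands in for the real (718 × 19) matrix of expert grades.

```python
# Minimal sketch of clustering the per-image abundance metrics into five habitat
# groups with K-means (assumed tooling: scikit-learn; data below are synthetic).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(718, 19)).astype(float)  # placeholder for the expert grades

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(scores)  # integer labels 0-4, i.e. clusters C1-C5

# Mean substrate/taxon metric within each cluster (cf. Figure 14a).
cluster_means = np.vstack([scores[cluster_labels == k].mean(axis=0) for k in range(5)])
print(np.round(cluster_means, 2))
```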

Figure 14(a) shows the mean for each of the substrate and taxon metrics in the resulting clusters. It can be seen that cluster C1, located along the shelf, is dominated by sandy substrate and benthic foraminifera. Cluster C5, on the other hand, is located in the shallow reef area and features a mix of substrate types and relatively high abundances across all taxa. The distribution of these distinct habitats is a function of the substrate as well as the light available and hence depth. Figure 14(b) shows a mapping of the clusters onto the first two principal components of the metrics.


Figure 13. (a) AUV track overlaid on the underlying bathymetric model and color coded by cluster derived from substrate and taxa composition identified by an expert. (b) Corresponding depth profile color coded by cluster.

The first two principal components account for more than 60% of the variability in the data. The clustering has resulted in groups that are well defined using these principal components, and we were interested to evaluate how well our visual topics would perform in classifying the imagery using these labels. Examples of the images corresponding to these clusters are shown in Figure 15. Figure 16 shows portions of the 3D reconstruction corresponding to this dive for each of the clusters derived based on the hand-labeled imagery.
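A projection of the metrics onto their first two principal components, as used for Figure 14(b), can be sketched in the same way; again scikit-learn is an assumed tool, and the arrays below are placeholders for the real scores and cluster labels.

```python
# Minimal sketch of the PCA projection used to visualize the clusters
# (assumed tooling: scikit-learn; synthetic placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
scores = rng.integers(0, 6, size=(718, 19)).astype(float)  # placeholder expert grades
cluster_labels = rng.integers(0, 5, size=718)               # placeholder K-means labels

pca = PCA(n_components=2)
pc_scores = pca.fit_transform(scores)  # (718, 2) projection onto the first two components

# For the real data the first two components explain >60% of the variance.
print("variance explained:", float(pca.explained_variance_ratio_.sum()))

# Cluster centroids in PCA space give a quick check of how well separated the groups are.
for k in range(5):
    print(f"C{k + 1} centroid:", pc_scores[cluster_labels == k].mean(axis=0).round(2))
```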

The hand labeling of the images takes a considerable amount of time and requires an expert trained in taxonomic analysis. Automated classification approaches, on the other hand, can be run without expert input once trained and can be completed within a few hours of the completion of a dive.

Figure 14. (a) Mean substrate and taxon metrics for the clusters identified in the data. Substrate metrics are colored in red and taxon metrics in blue. The numbering corresponds to the labels shown in Table IV. (b) Distribution of cluster labels using the first two principal components of the cluster labels.

For the data collected during this cruise, it is simply intractable to hand label all of the imagery collected by the vehicle. We therefore used the cluster labels derived from the hand-labeled composition of the imagery to test a number of classification systems based on the visual topics describing each image, as outlined in Section 3.3. We ran the classification using a number of common classifiers, including a regression tree (Breiman, Friedman, Stone, & Olshen, 1984), linear discriminant analysis (Krzanowski, 1988), K nearest neighbor (Mitchell, 1997) (with support from the five nearest neighbors), and a support vector machine (SVM) (Chang & Lin, 2001; Hsu, Chang, & Lin, 2003).


Figure 15. Sample images of habitat clusters derived from substrate and taxa composition: (a) C1, (b) C2, (c) C3, (d) C4, (e) C5.

A summary of the classification results is shown in Figure 17. The first column shows the type of classifier and the average estimation accuracy using 10-fold cross validation of the labeled data for each of the four classifiers tested. The middle column shows the confusion matrix, and the last column shows the depth profile color coded by the images that are misclassified by each classifier (green is no error, and the strength of the red dots is proportional to the number of times each image is misclassified during cross validation). As can be seen, the regression tree does not perform as well as the other three classifiers tested for this problem. Images along the first slope tend to be more poorly resolved, and there appear to be common images that are incorrectly classified by all classifiers.


Figure 16. Examples of the habitat classes identified for this dive: (a) C1, (b) C2, (c) C3, (d) C4, (e) C5. Each strip represents an approximately 20 × 1.5 m segment extracted from the 3D reconstruction for dive 9 and illustrates the different habitats encountered. Sample images from each class were used to train the classifier.

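The classifier comparison itself can be reproduced with off-the-shelf tools. The sketch below is a minimal, hypothetical version using scikit-learn (not the implementations cited in the paper; a decision tree classifier stands in for the regression tree of Breiman et al., and an RBF SVM stands in for LIBSVM), and the visual-topic features and habitat labels are synthetic placeholders.

```python
# Minimal sketch of the classifier comparison: four classifiers evaluated with
# 10-fold cross validation and per-classifier confusion matrices
# (assumed tooling: scikit-learn; synthetic placeholder data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(2)
topics = rng.random((718, 50))         # placeholder visual-topic vector per image
labels = rng.integers(0, 5, size=718)  # placeholder habitat cluster labels (C1-C5)

classifiers = {
    "tree": DecisionTreeClassifier(random_state=0),
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(n_neighbors=5),  # support from the five nearest neighbours
    "svm": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    # Each image is predicted while held out in one of the 10 folds.
    predicted = cross_val_predict(clf, topics, labels, cv=10)
    print(f"{name}: accuracy {accuracy_score(labels, predicted):.2f}")
    print(confusion_matrix(labels, predicted))  # rows: true cluster, columns: estimated cluster
```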

These results suggest that the visual features are able to successfully differentiate the habitat classes identified from the substrate and taxonomic metrics assigned during our ecological assessment. We are now developing fully automated clustering and classification systems that can provide clustering of the imagery without the need for expert input. One difficulty with such automated methods for image clustering is the ability to assign semantic meaning to the resulting clusters. We are also working to relate the habitat classification based on the AUV imagery to the underlying bathymetric data collected by the ship to facilitate the creation of broader-scale habitat models.

    4.3. Dive 11: Brittlestar Beds

Finally, we highlight the results of dive 11, in which the vehicle both surveyed a shallow portion of the reef and conducted broader scale surveys across a sandy dune field in the vicinity of Hydrographers Passage. Figure 18 shows a portion of the AUV dive track overlaid on the bathymetry, with image locations featuring aggregations of brittlestars highlighted in red. These locations are correlated with the lee side of the dunes and provide the animals with a vantage point from which to feed (Byrne, 2009).


Figure 17. A summary of classification results. For each classifier the figure shows the 10-fold cross-validation accuracy, the confusion matrix, and the along-track depth profile (50–150 m over roughly 4,000 m) color coded by misclassified images. The recoverable accuracies and confusion matrices are given below; rows are the estimated clusters and columns the true clusters C1–C5.

Regression tree (75%):
    Est. C1    226    2    5   23    4
    Est. C2      2   62    2    9    2
    Est. C3      6    4   67    8   25
    Est. C4     28   14    9   91    6
    Est. C5      0    0   23    9   91

Linear discriminant analysis (86%):
    Est. C1    255    2    4   15    1
    Est. C2      0   72    1    1    1
    Est. C3      1    2   80    5   17
    Est. C4      4    6    5  115   12
    Est. C5      2    0   16    4   97

K nearest neighbor (87%):
    Est. C1    256    1    5   16    1
    Est. C2      0   73    1    1    2
    Est. C3      4    1   75    3   13
    Est. C4      0    6    6  114    9
    Est. C5      2    1   19    6  103

SVM (85%):
    Est. C1    259    1    5   26    1
    Est. C2      0   71    1    1    0
    Est. C3      1    3   72    5   12
    Est. C4      0    7    6  105   11
    Est. C5      2    0   22    3  104

Based on the georeferenced imagery provided by the vehicle, a grab sampler was used to recover samples of these animals for further study. Samples of the brittlestars were recovered from depths of approximately 60–70 m in locations predicted to feature brittlestars based on the AUV imagery. Sampling conducted at locations where no brittlestars were seen did not recover any animals, confirming the accuracy of the georeferencing of the imagery and illustrating how the AUV data were used to inform further sampling.

    5. CONCLUSIONS

Properly instrumented AUVs can fill an important niche as part of multidisciplinary research cruises such as the one described here. They present a novel tool for collecting rich, high-resolution, georeferenced data sets that are complementary to more traditional shipborne measurements. In this case the imagery produced by the AUV has proven to be invaluable in providing very high-resolution, fine-scale interpretation of benthic habitat characteristics. The AUV provided a platform to obtain quantitative information on biological communities and habitats beyond the depths accessible by traditional methods.

The visual SLAM techniques have allowed us to compensate for errors in the navigation induced by dead-reckoning drift. The algorithms have been validated using large-scale data sets collected as part of a scientific expedition to survey drowned reefs. Although validating navigation systems was not the primary goal of this expedition, we have demonstrated how SLAM can be used to generate a consistent, georeferenced navigation solution suitable for creating detailed 3D models of the seafloor. These models are now being used to investigate the relationship between fine-scale seabed structure and the benthic organisms they support.

Just as important as a reliable and capable platform is the ability to quickly produce consistent and accurate representations of the survey data.


Figure 18. (a) Brittlestar locations identified in red overlaid on the underlying bathymetry. Notice that the aggregations tend to correspond to the tops of the dune features. (b) A portion of the 3D model of the seafloor highlighting the density of brittlestars on the dunes. This represents an area of approximately 3.0 × 1.5 m featuring hundreds of animals.

This allows for feedback into cruise planning. We have illustrated how automated image classification tools can be used to quickly summarize the distribution of substrate types seen during a particular dive. The volume of data collected by systems such as this will require much of the preliminary interpretation to be completed automatically, allowing experts to concentrate on detailed investigation of particular features of interest.

    APPENDIX: INDEX TO MULTIMEDIA EXTENSIONS

A video is available as Supporting Information in the online version of this article.

Extension   Media type   Description
1           Video        A video summarizing the cruise results. The video shows a sample sequence of images captured by the AUV and the resulting SLAM solution. 3D seafloor models are also shown for the dives described in the paper.

    ACKNOWLEDGMENTS

This work is supported by the ARC Centre of Excellence programme, funded by the Australian Research Council (ARC) and the New South Wales State Government, the Integrated Marine Observing System (IMOS) through the Department of Innovation, Industry, Science and Research (DIISR) National Collaborative Research Infrastructure Scheme, the Australian Marine National Facility, and National Geographic. RJB acknowledges a Queensland Smart Futures Fellowship for salary support. The authors would like to thank the captain and crew of the RV Southern Surveyor. Without their sustained efforts none of this would have been possible. Thanks to Peter Davies, Kate Thornborough, Erika Woolsey, Sandy Tudhope, Phil Manning, Alex Thomas, and the CSIRO techs for help and support onboard the ship. We also acknowledge the help of all those who have contributed to the development and operation of the AUV, including Duncan Mercer, George Powell, Ritesh Lal, Paul Rigby, Jeremy Randle, Bruce Crundwell, and the late Alan Trinder.

    REFERENCES

Bare, A., Grimshaw, K., Rooney, J., Sabater, M., Fenner, D., & Carroll, B. (2010). Mesophotic communities of the insular shelf at Tutuila, American Samoa. Coral Reefs, 29, 369–377.

Barker, B. A. J., Helmond, I., Bax, N. J., Williams, A., Davenport, S., & Wadley, V. A. (1999). A vessel-towed camera platform for surveying seafloor habitats of the continental shelf. Continental Shelf Research, 19, 1161–1170.

Beaman, R., Webster, J., & Wust, R. (2008). New evidence for drowned shelf edge reefs in the Great Barrier Reef, Australia. Marine Geology, 247(1–2), 17–34.

Breiman, L., Friedman, J., Stone, C., & Olshen, R. (1984). Classification and regression trees. New York: Chapman & Hall.

Burt, P. J., & Adelson, E. H. (1983). A multiresolution spline with application to image mosaics. ACM Transactions on Graphics, 2(4), 217–236.

Byrne, M. (2009). Flashing stars light up the Reef's shelf. ECOS Magazine, 150, 28–29.

Camoin, G., Cabioch, G., Eisenhauer, A., Braga, J., Hamelin, B., & Lericolais, G. (2006). Environmental significance of microbialites in reef environments during the last deglaciation. Sedimentary Geology, 185(3–4), 277–295.


Camoin, G., Yasufumi, I., & McInroy, D. (2005). Integrated Ocean Drilling Program Expedition 310 Scientific Prospectus: The last deglacial sea level rise in the South Pacific: Offshore drilling in Tahiti (French Polynesia) (Tech. Rep.). Integrated Ocean Drilling Program. Accessed July 3, 2010, at http://publications.iodp.org/preliminary_report/310/.

Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: A library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm. Accessed July 3, 2010.

Clark, J. H. (1976). Hierarchical geometric models for visible surface algorithms. Communications of the ACM, 19(10), 547–554.

CSIRO MNF RV Southern Surveyor (2008). CSIRO Marine National Facility RV Southern Surveyor facilities and equipment. Retrieved February 9, 2008, from http://www.marine.csiro.au/nationalfacility/features/facilities.htm.

Curless, B., & Levoy, M. (1996, August). A volumetric method for building complex models from range images. In SIGGRAPH '96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA (pp. 303–312). New York: Association for Computing Machinery.

Davies, P., Braga, J., Lund, M., & Webster, J. (2004). Holocene deep water algal buildups on the Eastern Australian shelf. PALAIOS, 19(6), 598–609.

Davies, P., & Montaggioni, L. (1985, May–June). Reef growth and sea level change: The environmental signature. In Proceedings of the Fifth International Coral Reef Symposium, Tahiti, French Polynesia (vol. 3, pp. 477–515).

Dissanayake, M., Newman, P., Clark, S., Durrant-Whyte, H., & Csorba, M. (2001). A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3), 229–241.

Done, T. (1982). Patterns in the distribution of coral communities across the central Great Barrier Reef. Coral Reefs, 1(2), 95–107.

Durrant-Whyte, H., & Bailey, T. (2006). Simultaneous localisation and mapping (SLAM): Part I—The essential algorithms. Robotics and Automation Magazine, 13, 99–110.

Eustice, R., Singh, H., Leonard, J., & Walter, M. (2006). Visually mapping the RMS Titanic: Conservative covariance estimates for SLAM information filters. International Journal of Robotics Research, 25(12), 1223–1242.

Fairbanks, R. (1989). A 17,000-year glacio-eustatic sea-level record: Influence of glacial melting rates on the Younger Dryas event and deep ocean circulation. Nature, 342, 637–642.

Gelb, A. (1996). Applied optimal estimation, 14th ed. Cambridge, MA: MIT Press.

Grasmueck, M., Eberli, G. P., Viggiano, D. A., Correa, T., Rathwell, G., & Luo, J. (2006). Autonomous underwater vehicle (AUV) mapping reveals coral mound distribution, morphology, and oceanography in deep water of the Straits of Florida. Geophysical Research Letters, 33, L23616.

Harris, P., & Davies, P. (1989). Submerged reefs and terraces on the shelf edge of the Great Barrier Reef, Australia. Coral Reefs, 8, 87–98.

Hartley, R., & Zisserman, A. (2000). Multiple view geometry in computer vision. Cambridge, UK: Cambridge University Press.

Henthorn, R., Caress, D., Thomas, H., McEwen, R., Kirkwood, W., Paull, C., & Keaten, R. (2006, September). High-resolution multibeam and subbottom surveys of submarine canyons, deep-sea fan channels, and gas seeps using the MBARI mapping AUV. In Proceedings of MTS/IEEE OCEANS 2006, Boston, MA (pp. 1–6).

Holmes, K., Niel, K. V., Radford, B., Kendrick, G., & Grove, S. (2008). Modelling distribution of marine benthos from hydroacoustics and underwater video. Continental Shelf Research, 28, 1800–1810.

Hopley, D. (2006). Coral reef growth on the shelf margin of the Great Barrier Reef with special reference to the Pompey Complex. Journal of Coastal Research, 22, 150–158.

Hopley, D., Graham, T., & Rasmussen, C. (1996, June). Submerged shelf-edge reefs, coral reefs, Great Barrier Reef Australia. In PACON 96: Recent Advances in Marine Science and Technology, Honolulu, HI (pp. 305–315).

Hsu, C.-W., Chang, C.-C., & Lin, C.-J. (2003). A practical guide to support vector classification. http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf. Accessed July 3, 2010.

Johnson-Roberson, M., Pizarro, O., Williams, S., & Mahon, I. (2010). Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys. Journal of Field Robotics, 27, 21–51.

Kahng, S., & Kelley, C. (2007). Vertical zonation of megabenthic taxa on a deep photosynthetic reef (50–140 m) in the Auau Channel, Hawaii. Coral Reefs, 26, 679–687.

Krzanowski, W. (1988). Principles of multivariate analysis. Oxford, UK: Oxford University Press.

Kunz, C., Murphy, C., Camilli, R., Singh, H., Eustice, R., Roman, C., Jakuba, M., Willis, C., Sato, T., Nakamura, K., Sohn, R., & Bailey, J. (2008, September). Deep sea underwater robotic exploration in the ice-covered Arctic Ocean with AUVs. In IEEE/RSJ International Workshop on Intelligent Robots and Systems, Nice, France (pp. 3654–3660).

Lowe, D. (1999, September). Object recognition from local scale-invariant features. In International Conference on Computer Vision (2), Corfu, Greece (pp. 1150–1157).

Ludwig, K., Szabo, B., Moore, J., & Simmons, K. (1991). Crustal subsidence rate off Hawaii determined from 234U/238U ages of drowned coral reefs. Geology, 19, 171–174.

Mahon, I. (2008). Vision-based navigation for autonomous underwater vehicles. Ph.D. thesis, University of Sydney.

Mahon, I., Williams, S. B., Pizarro, O., & Johnson-Roberson, M. (2008). Efficient view-based SLAM using visual loop closures. IEEE Transactions on Robotics and Automation, 24(5), 1002–1014.

Maronna, R., Martin, R., & Yohai, V. (2006). Robust statistics: Theory and methods. Chichester, UK: Wiley.


Marthiniussen, R., Vestgard, K., Klepaker, R., & Storkersen, N. (2004). HUGIN-AUV concept and operational experiences to date. In Proceedings MTS/IEEE Oceans 2004 (vol. 2, pp. 846–850).

McEwen, R., Caress, D., Thomas, H., Henthorn, R., & Kirkwood, W. (2006). Performance of an autonomous underwater vehicle while mapping Smooth Ridge in Monterey Bay. In ASLO/TOS/AGU Ocean Sciences Meeting, Boston, MA.

Mitchell, T. (1997). Machine learning. Boston, MA: McGraw–Hill.

Nister, D., & Stewenius, H. (2006, June). Scalable recognition with a vocabulary tree. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York (vol. 2, pp. 2161–2168).

Philbin, J., Chum, O., Isard, M., Sivic, J., & Zisserman, A. (2007, October). Object retrieval with large vocabularies and fast spatial matching. In Proceedings IEEE CVPR, Rio de Janeiro, Brazil.

Pizarro, O., Colquhoun, J., Rigby, P., Johnson-Roberson, M., & Williams, S. (2008, September). Towards image-based marine habitat classification. In Proceedings of MTS/IEEE Oceans 2008, Quebec, PQ, Canada (pp. 1–7).

Pizarro, O., Eustice, R., & Singh, H. (2003, December). Relative pose estimation for instrumented, calibrated imaging platforms. In Digital Image Computing Techniques and Applications, Sydney, Australia (pp. 601–612).

Pizarro, O., Williams, S., & Colquhoun, J. (2009, May). Topic-based habitat classification using visual data. In Proceedings of MTS/IEEE Oceans 2009, Bremen, Germany (pp. 1–8).

Roman, C., & Singh, H. (2007). A self-consistent bathymetric mapping algorithm. Journal of Field Robotics, 24(1–2), 23–50.

Singh, H., Armstrong, R., Gilbes, F., Eustice, R., Roman, C., Pizarro, O., & Torres, J. (2004). Imaging coral I: Imaging coral habitats with the SeaBED AUV. Subsurface Sensing Technologies and Applications, 5, 25–42.

Singh, H., Can, A., Eustice, R., Lerner, S., McPhee, N., Pizarro, O., & Roman, C. (2004). SeaBED AUV offers new platform for high-resolution imaging. EOS, Transactions of the American Geophysical Union, 85(31), 289, 294–295.

Sivic, J., & Zisserman, A. (2003, October). Video Google: A text retrieval approach to object matching in videos. In Proceedings of the International Conference on Computer Vision, Nice, France (vol. 2, pp. 1470–1477).

Webster, J., Beaman, R., Bridge, T., Davies, P., Byrne, M., Williams, S., Manning, P., Pizarro, O., Thornborough, K., Thomas, A., & Tudhope, S. (2008). From corals to canyons: The Great Barrier Reef margin. EOS, Transactions of the American Geophysical Union, 89(24), 217–218.

Webster, J., Clague, D., Riker-Coleman, K., Gallup, C., Braga, J., Potts, D., Moore, J., Winterer, E., & Paull, C. (2004). Drowning of the −150 m reef off Hawaii: A casualty of global meltwater pulse 1A? Geology, 32, 249–252.

Webster, J., Davies, P., Beaman, R., Williams, S. B., & Byrne, M. (2008). RV Southern Surveyor—Voyage summary SS07/2007 (Tech. Rep.). CSIRO Marine National Facility. http://www.marine.csiro.au/nationalfacility/voyagedocs/2007/summarySS07-2007.pdf. Accessed July 3, 2010.

Webster, J., Wallace, L., Silver, E., Potts, D., Braga, J., Renema, W., Riker-Coleman, K., & Gallup, C. (2004). Coralgal composition of drowned carbonate platforms in the Huon Gulf, Papua New Guinea; Implications for lowstand reef development and drowning. Marine Geology, 204, 59–89.

Williams, S. B., & Mahon, I. (2004, April–May). Simultaneous localisation and mapping on the Great Barrier Reef. In Proceedings IEEE International Conference on Robotics and Automation, New Orleans, LA (vol. 2, pp. 1771–1776).

Williams, S. B., Pizarro, O., Mahon, I., & Johnson-Roberson, M. (2008, July). Simultaneous localisation and mapping and dense stereoscopic seafloor reconstruction using an AUV. In Experimental Robotics: The 11th International Symposium, Athens, Greece (vol. 54, pp. 407–416).

Yoerger, D., Jakuba, M., Bradley, A., & Bingham, B. (2007). Techniques for deep sea near bottom survey using an autonomous underwater vehicle. International Journal of Robotics Research, 26, 41–54.
