
Appendix A.- The TRIDENT Roadmap

Concerning the “system trialling and demonstration towards the end of the project” recommendation, several steps have already been taken to implement a TRIDENT roadmap. The following information is necessarily dynamic and will possibly require some updates during the coming weeks, but it gives a complete global picture for a better understanding of the integration effort required within TRIDENT.

Figure 1 represents all the work to be implemented under TRIDENT, including both the finished tasks and the pending ones. The emphasis is placed on the experiments, so Table 1 explains each of the experiments designed along the TRIDENT lifetime.

Fig. 1. TRIDENT Project Implementation Plan


E1. AUV-based Photomosaic From a Moored Boat (July 2010)

Objective: Test the capability of building a small-area photomosaic using an AUV launched from a moored boat.

Experiment: The SPARUS AUV was used during the Azores FREESUBNET final workshop to carry out a seafloor survey of a small area of Monte da Guia Bay in Horta (Faial).

E2. Bathymetry From a Boat (September 2010)

Objective: Test the capability of building bathymetries using the navigation and mapping sensor suite to be mounted on the G500 AUV.

Experiment: During the “International Interdisciplinary Field Training of Marine Robotics and Applications” (BTS2010) event, the navigation and mapping sensor suite of the G500 AUV was mounted on a torpedo-shaped sensor rig attached to a manned boat. The survey was conducted in the Kornati Islands Archipelago.

E3. Fixed-Base Manipulation (September 2010)

Objective: Demonstrate the capability of controlling the robotic arm alone to perform a multisensory intervention task.

Experiment: The 4-DOF arm owned by the UJI partner was mounted on a fixed-base structure and submerged in a water tank. A down-looking camera was used for detecting and tracking a black-box mockup. Object hooking in the presence of perturbations was demonstrated.

E4. Object Recovery in a Water Tank (May 2011)

Objective: Test the visual mapping capabilities of the G500 as well as its capabilities for object recovery through hooking in a controlled environment (water tank).

Experiment: During the first-year review of the TRIDENT project, held at CIRS at the University of Girona, it was demonstrated how to build a small photomosaic as well as how to select an object of interest and launch the I-AUV to hook it. The experiment was executed using a 4-DOF arm with a gripper available at the UJI partner.

E5. Cooperative Navigation (July 2011)

Objective: Acquire the cooperative navigation dataset corresponding to D1.1.

Experiment: On July 26th, the navigation dataset for the Navigation WP of the TRIDENT EU project was collected in Cala Joncols, Roses (Girona). The ASC was represented by a manned surface boat equipped with a USBL-AHRS-DGPS-ACOM sensor suite. The G500 was equipped with its standard navigation sensor suite. Position fixes gathered with the USBL were captured at the surface craft and forwarded to the AUV through the acoustic modem.

E6. 1st Object Recovery at Sea. Robot Cooperation (October 2011)

Objective: Reproduce experiment E4 at sea in a completely autonomous manner. Target ASC/I-AUV cooperation during the survey phase.

Experiment: 1) Experiment E4 will be reproduced in a harbor, removing the umbilical and having the mechatronics completely integrated. 2) An AUV will be used at the surface, playing the role of the ASC. A surface boat will be used for the USBL tracking, as well as to forward the I-AUV position to the I-AUV through the acoustic modem for cooperative navigation and to the ASC for cooperative guidance during the survey.

E7. Dexterous Object Recovery in a Water Tank (June 2011)

Objective: Assess the final mechatronic integration of the I-AUV system.

Experiment: This experiment will be a reproduction of experiment E4 reported above, but using the final mechatronics developed for the TRIDENT project.

E8. Seafloor Mapping (September 2012)

Objective: Demonstrate the survey phase of the proposed multipurpose intervention methodology.

Experiment: The ASC and the I-AUV will localize themselves through cooperative navigation methods. Both vehicles will perform cooperative path following. The I-AUV will gather optical images and bathymetric profiles. An ortho-photomosaic and a bathymetry of the area will be produced. The experiment will target a shallow-water area (<30 m depth) to simplify the logistics.

E9. Intervention (January 2012)

Objective: Demonstrate the intervention phase of the proposed multipurpose intervention methodology.

Experiment: The ASC and the I-AUV will localize themselves through cooperative navigation methods. The ASC/I-AUV team will navigate to the target area, then the I-AUV will perform an object search. Once the object is located in the camera field of view, the I-AUV will perform free-floating navigation to carry out the multisensory-based intervention.

Table 1. TRIDENT Experimental Roadmap


Furthermore, Figure 2 depicts the plan for “system trialling and demonstration” scheduled for the imminent TRIDENT school in October.

Fig. 2. Experimental plan for the TRIDENT School

Finally, Figure 3 depicts the implementation roadmap from the WPs’ point of view, highlighting the initially designed milestones.

Fig. 3. TRIDENT WP implementation plan

[Roadmap figure, horizontal axis: monthly timeline from March 2010 to February 2013]

! " #$%&! '" ( ) *+ ) , '- . ( , ( /)0'1234'

'

$ 5678797( 8'( 6': ; 58) . 7( <'! " #$%&' #()&%*+, &%- &#+%- . +/01234/+56' 7. $%+%- (%+- (8+%' +9. +: . ; ' #8%6(%. : +, &%- +(%+). (8%+' #. +<6' %' %*<. +' 6+. =<. 6&; . #%+

+

- . ( , ( /5* '. ( ) *+ ) , <'

'

/- &8+&8+(+6' ( : ; (<+<6' <' 8()+&#+%. 6; 8+' >+: &>>. 6. #%+8$. #(6&' 8+(#: +%- . &6+: . <. #: . #$&. 8?+@' 6&A' #%()+(=&8+6. <6. 8. #%8+%- . +%&; . )&#. B+>6' ; +%- . +9. C&##&#C+%' +%- . +. #: +' >+%- . +<6' 7. $%?++D. 6%&$()+(=&8+&8+: &E&: . : +&#+%- . +: &>>. 6. #%+, ' 6F<($F(C. 8B+()%- ' " C- +%- . 8. +: &E&8&' #8+%. #: +%' +: &8(<<. (6+%' +%- . +. #: +' >+%- . +<6' 7. $%B+&#: &$(%&#C+; ' 6. +&#%. C6(%&' #?++3). ; . #%8+&#8&: . +%- . +6' ( : ; (<+(6. +: &>>. 6. #%+8$. #(6&' 8+%- (%+: . ; ' #8%6(%. +%- . +<6' C6. 88+' >+/01234/?+G' ; . +' >+%- . ; +$' 66. 8<' #: +%' +/01234/+; &). 8%' #. 8+' 6+8&C#&>&$(#%+<6' C6. 88+%- (%+8- ' " ): +9. +: . ; ' #8%6(%. : +%' +%- . +3H?+/- . 8. +(6. +#" ; 9. 6. : +(#: +; (6F. : +&#+6. : ?+/- . +6. 8%+' >+I#' #J; (6F. : K+8$. #(6&' 8+&#: &$(%. +<6' C6. 88+(%+(+, ' 6F<($F(C. +). E. )B+(#: +()%- ' " C- +%- . *+, ' " ): +#' %+#. . : +%' +9. +>' 6; ())*+: . ; ' #8%6(%. : +%' +%- . +3HB+%- . *+(6. +$6" $&()+>' 6+($- &. E&#C+%- . +6. 8%+' >+8$. #(6&' 8?++

L66' , 8+&#: &$(%. +: . <. #: . #$&. 8+(; ' #C+8$. #(6&' 8?+! ' 6+. =( ; <). B+8$. #(6&' +M+$(#+' #)*+9. +($$' ; <)&8- . : +&>+%- . +E. - &$). B+(6; +(#: +- (#: +(6. +(E(&)(9). ?+N#$. +8$. #(6&' +M+&8+($$' ; <)&8- . : B+8$. #(6&' +O+$(#+9. +($- &. E. : +%' ' ?+

/- . +$' #8' 6%&" ; +>. . : 9($F+&8+#. . : . : +&#+' 6: . 6+%' P+

[Roadmap figure labels, WP7 to WP4: Knowledge-Based Approach; HIL Simulation; Fixed-Base Manipulation; Air Target Perturbation; Underwater Arm Perturbation; Free-Floating Manipulation; Arm/Hand Design; Integrated Arm/Hand; AUV/Arm/Hand Ready; Arm/Hand Ready; Free Floating; Cooperative Navigation; Seafloor Mapping; milestones IST-M1, M2, M3, M4, M5]

[Excerpt from deliverable D4.1, “Visual and acoustic image processors”, p. 32:]

3. Accuracy. To minimize false detections and misses, the descriptors have to separate the target from the background precisely.

As the appearance of the scene is not known a priori, we will provide the system with a set of possible descriptors and let it decide, based on the image with the marked target, which descriptors to use for the best classification results.

Up to now we have run some experiments using the hue and saturation channels of the HSV color space as color descriptors. Using the image with the marked target for training, the histogram of hue and saturation is computed for the target. This histogram is invariant against translation, rotation and brightness, and its computation time is minimal. In the detection phase, the histogram is back-projected onto the camera image, which results in a probability image where each pixel represents the probability of belonging to the object. As can be seen in Figure 12, objects that have a dominant representative color that differs from the background colors can easily be detected this way.

Figure 12: Marked object (left) and color classification result (right)
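The deliverable does not spell out the exact implementation, so the following is only a minimal sketch of this hue/saturation back-projection, written with OpenCV in Python; the function names, bin counts and normalization are our own illustrative choices.

```python
import cv2

def train_target_histogram(image_bgr, target_mask, bins=(30, 32)):
    """Hue/saturation histogram of the pixels marked as target."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Channels 0 and 1 are hue and saturation; value (brightness) is ignored,
    # which gives the invariance against brightness mentioned above.
    hist = cv2.calcHist([hsv], [0, 1], target_mask, list(bins),
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def backproject(image_bgr, hist):
    """Per-pixel likelihood of belonging to the target (8-bit image)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)
```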

Problems arise when the object contains colors that occur in the background as well. We cope with this problem by choosing as object description only those colors that do not occur in the background (see Figure 13).

To filter out camera artifacts and small false detections, we use morphological operators.

Figure 13: Object outline that contains background colors (left), color classification using all colors (center), and color classification using only the colors that are representative for the object (right)
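A possible way to implement that background-color exclusion, again only as a sketch reusing train_target_histogram from above (the dominance ratio is an arbitrary assumption, not a TRIDENT parameter): build a second histogram over the background pixels and keep only the hue/saturation bins where the target clearly dominates.

```python
import cv2
import numpy as np

def discriminative_histogram(image_bgr, target_mask, ratio=2.0):
    """Keep only the hue/saturation bins that are clearly more frequent
    on the target than on the background, so shared colors are ignored."""
    background_mask = cv2.bitwise_not(target_mask)
    h_target = train_target_histogram(image_bgr, target_mask)
    h_background = train_target_histogram(image_bgr, background_mask)
    # Zero out every bin where the target does not dominate the background.
    keep = h_target > ratio * h_background
    h_disc = np.where(keep, h_target, 0).astype(np.float32)
    cv2.normalize(h_disc, h_disc, 0, 255, cv2.NORM_MINMAX)
    return h_disc
```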

The next step will be to extend this approach to arbitrary descriptors.

The binary or probability image that results from classification has to be post-processed to obtain an estimate of the object position.
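A hedged sketch of that post-processing: threshold the back-projection, apply a morphological opening to remove the small false detections mentioned above, and take the centroid of the largest connected component as the position estimate. The threshold and kernel size below are illustrative values.

```python
import cv2
import numpy as np

def estimate_object_position(prob, threshold=64, kernel_size=5):
    """Return the (x, y) centroid of the largest detected blob, or None."""
    _, binary = cv2.threshold(prob, threshold, 255, cv2.THRESH_BINARY)
    # Morphological opening removes camera artifacts and small false detections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if n < 2:                        # label 0 is the background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])
```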

[Roadmap figure scenario label: Target Identification]


[Roadmap figure scenario label: Target Localization]

…improvements related to the selection of the optimal feature descriptor for a specific need (station keeping, odometry, target characterization, …), a specific kind of scene, or specific illumination conditions, to name but a few.

Concerning the tracking of features, which is the basis for visual motion estimation, the method under development finds the best subset of matching key points (features) between images in order to calculate the projective transformation between those images (see Fig. iii). When an image overlaps with more than one previous image, combining several transformations can refine the motion estimate. This type of motion estimate is similar to what can be achieved with inertial sensors and with the acoustic Doppler, and together with these other sensors it forms the input to the navigation module.
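The text does not state which detector, matcher or robust estimator is used, so the sketch below only illustrates this kind of pairwise motion estimation, using ORB features, brute-force matching and RANSAC homography fitting in OpenCV:

```python
import cv2
import numpy as np

def relative_homography(img_a, img_b, min_matches=12):
    """Projective transformation mapping img_a onto img_b, estimated from
    the best subset of matching key points (RANSAC inliers), or None."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None                                   # no features found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None                                   # not enough overlap
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # RANSAC discards outlier correspondences before the final fit.
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H  # 3x3 matrix; chaining such transformations over
              # overlapping images refines the overall motion estimate
```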

Something that cannot be achieved with these other methods is a drift-free pose estimate with regard to an arbitrary reference frame earlier in time. We provide two types of such pose estimates. The first type is provided during the survey phase and consists of a pose estimate with regard to images shot shortly before or after an arbitrary point in time. The second type is provided during the intervention phase and consists of a pose estimate with regard to the target area.

The vision module does not build or keep an internal map of the survey path. This is the task of

the navigation module, which has access to additional sensor data. Neither can the vision module

afford to match every image against every other image in order to report whenever the survey

path has crossed itself. Instead, whenever the navigation module concludes that the survey path

should have crossed itself, the vision module must be queried to verify that the robot is currently

hovering over a location that was visited at a specific point in time. If that is the case, the vision

module can give a precise pose estimate with regard to that point in time, which allows the

navigation unit to correct for drift. If however there is no match between the current frame and

the reference frame(s), and the navigation module decides that this is a navigational error that

needs recovery, the vision module can also be queried about matches between images at other

points in time.

In order to facilitate such database-like behaviour, the developed architecture combines lazy execution (time-consuming computations are delayed until the result is actually queried, and never executed if the result is never queried) with a variety of caching algorithms that store intermediate and final results if they can be reused for other computations or other queries.
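A minimal sketch of this lazy-plus-cached query behaviour, assuming the relative_homography helper sketched earlier; the class and method names are illustrative, not the actual TRIDENT interface. The expensive matching only runs when the navigation module actually asks for a result, and that result is memoized for later queries.

```python
import functools

class VisionQueryModule:
    """Answers pose queries lazily: nothing is matched until it is asked for."""

    def __init__(self, frames):
        self.frames = frames                 # e.g. {timestamp: image}

    @functools.lru_cache(maxsize=1024)
    def match(self, t_query, t_reference):
        """Relative pose of frame t_query w.r.t. frame t_reference, or None
        if the two images do not overlap. Cached, so repeated or related
        queries do not redo the expensive feature matching."""
        return relative_homography(self.frames[t_query],
                                   self.frames[t_reference])

    def verify_loop_closure(self, t_now, t_candidate):
        """Called when the navigation module believes the survey path has
        crossed itself: confirm it visually and return a drift-free pose
        estimate, or None so the caller can try other reference frames."""
        return self.match(t_now, t_candidate)
```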

The architecture has been tested on different sequences that simulate station-keeping and survey situations, and good results have been obtained.

Other tracking techniques, based on particle filters, are under investigation, and we expect promising results in the coming months.

Fig. iii Feature correspondence test between consecutive images in a sequence

[Roadmap figure scenario labels, WP3: Visual Odometry; Object Search; Sensor-Rig-Based Bathymetry; Moored-Boat Photo-Mosaic; Water-Tank Photo-Mosaic; Boat Follows the AUV / Mapping]

Figure 5. Knowledge integration

At the lowest level (centre of Figure 2), there are actions from transducers (i.e., sensors and actuators). At the next level up, there are tasks from devices that play a role as actors. At the next level up, there are operations carried out by vehicles, which play a role as agents. At the highest level, there are missions carried out by groups of vehicles that play a role as holons (multi-agents). The basic robotics architecture layers (deliberation, execution, and behaviour) can be placed between these levels.
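Purely as an illustration of this level hierarchy, a possible nesting is sketched below; the class names are our own, not the ontology actually defined in the deliverable.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transducer:            # lowest level: sensors/actuators -> actions
    name: str

@dataclass
class Device:                # actors -> tasks
    name: str
    transducers: List[Transducer] = field(default_factory=list)

@dataclass
class Vehicle:               # agents -> operations
    name: str
    devices: List[Device] = field(default_factory=list)

@dataclass
class VehicleGroup:          # holons (multi-agents) -> missions
    name: str
    vehicles: List[Vehicle] = field(default_factory=list)

# Example: the ASC and the I-AUV acting together as one holon.
team = VehicleGroup("ASC + I-AUV team", [Vehicle("ASC"), Vehicle("I-AUV")])
```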

This part of the work is now completed. More details can be found in deliverable 3.1 that has been

issued in December 2010.

1.1.2. Task 3.2: Conops and agent identification

This task aimed at identifying the main elements of the system in terms of concepts of operations and mapping them to a set of services. This was done using a set of use cases. The use case for the high-level missions is shown in Figure 6.

[Roadmap figure scenario labels, WP2 and WP1: Knowledge Representation; Bottom Path Following; Leader Following; Homing Controller; Docking Controller]