
ARE: Augmented Reality Environment for Mobile Robots
Mario Gianni & Federico Ferri & Fiora Pirri
ALCOR Lab, DIAG, Sapienza University of Rome

Contact Information:
ALCOR Lab, DIAG, Sapienza University of Rome
Via Ariosto 25, 00185 Rome, Italy
Site: www.dis.uniroma1.it/~alcor
Phone: +39 0677274155
Email: [email protected]

Abstract

ARE is a development tool that allows cognitive robotics modelers to construct, in real time, complex planning scenarios for robots, eliminating the need to model the dynamics of both the robot and the real environment, as a full simulation environment would require. The framework also builds a world model representation that serves as ground truth for training and validating algorithms for vision, motion planning and control. The AR-based framework is applied to evaluate the robot's capability to plan safe paths to goal locations in real outdoor scenarios while the planning scene dynamically changes, being augmented by virtual objects.

Introduction

Augmented Reality (AR) has recently stepped beyond its usual scope of applications, such as machine maintenance, military training and production [1].
• AR facilitates robot programming [2].
• AR allows cognitive robotics developers to design a variety of complex scenarios involving real and virtual components in a flexible manner [6, 3].
• AR eliminates the need to model the entire environment, as it supplements the real world instead of replacing it.
• AR facilitates experiments that bypass complex simulation models, expensive hardware setups and highly controlled environments at the various stages of cognitive robot development.

Figure 1: Degree of Complexity

Objectives

In this work, we present an AR-based simulation framework which allows robot developers to build on-line an Augmented Reality Environment (ARE) for real robots, integrated into the visualization interface of the Robot Operating System (ROS) [5]. The system goes beyond an interface for drawing objects, as the design exploits a stochastic model regulating both the life-cycle and the behavior of the artefacts. Furthermore, the degree of certainty of the existence and behaviors of the artefacts, with respect to what the robot perceives and knows about its space, can be tuned according to the experiment's needs.

AR-based Simulation Framework

The AR-based simulation framework registers virtual objects, such as robots, cars, people, pallets and other kinds of obstacles, into the real environment.

Figure 2: Artefacts

The AR-based simulation framework includes:
• The model of the real world.
• The model of the Artefacts.
• The AR-builder server.

The real world model

The real world model comprises both the 2D occupancy grid map M2D and the octree-based 3D map M3D [7, 8].
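As a minimal illustration of the 2D layer of this world model, an occupancy grid like M2D can be sketched as below. This is a dependency-free sketch, not the actual ARE or ROS data type; the class name, the 0.5 "unknown" convention, and the free-space threshold are illustrative assumptions.

```python
# Minimal 2D occupancy grid sketch (illustrative; not the actual ARE/ROS types).
# Cells hold an occupancy probability in [0, 1]; 0.5 means "unknown".

class OccupancyGrid2D:
    def __init__(self, width, height, resolution):
        self.width = width            # number of cells along x
        self.height = height          # number of cells along y
        self.resolution = resolution  # metres per cell
        self.cells = [[0.5] * width for _ in range(height)]

    def world_to_cell(self, x, y):
        """Convert metric coordinates to grid indices."""
        return int(x / self.resolution), int(y / self.resolution)

    def is_free(self, x, y, threshold=0.2):
        """A cell counts as free only if its occupancy is below the threshold."""
        i, j = self.world_to_cell(x, y)
        return self.cells[j][i] < threshold

grid = OccupancyGrid2D(width=100, height=100, resolution=0.1)
grid.cells[5][3] = 0.9              # mark an obstacle at cell (3, 5)
print(grid.is_free(0.35, 0.55))     # occupied cell -> False
print(grid.is_free(1.0, 1.0))       # unknown cell (0.5) -> also not free -> False
```

The octree-based M3D plays the same role in 3D, trading the dense array for a hierarchical structure that stays compact over large outdoor maps.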

Figure 3: World Model

In addition, a polygonal mesh SE is used to geometrically represent the environment. This model is compact and better suited for:
• Updating.
• Detecting collisions.
• Handling occlusions.

The Artefacts model

An artefact A is a dynamic object, defined by the following components:
• The properties Q = {l, b, p(·, t), q(·, t), Φ}.
• The life-cycle model {N(t) | t ≥ 0}.
• The behavioral model HA(t).
• The polygonal mesh SA.

Figure 4: Artefacts components

Artefact properties are sampled in the space Q of all possible tuples, defining an artefact type, according to a mixture of n Poisson distributions:

$$\Pr(X = k \mid \lambda_1, \ldots, \lambda_n) = \sum_{i=1}^{n} \pi_i \, \frac{\lambda_i^{k}}{k!} \, \exp(-\lambda_i)$$
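The mixture above can be sampled in two stages: first draw a component i with probability π_i, then draw k from a Poisson with rate λ_i. A minimal sketch, in which the mixture weights and rates are illustrative rather than taken from ARE:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below exp(-lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_mixture(weights, rates, rng):
    """Draw one sample from a mixture of Poisson distributions:
    pick a component i with probability weights[i], then sample Poisson(rates[i])."""
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return sample_poisson(rates[i], rng)

rng = random.Random(0)
weights = [0.7, 0.3]   # mixture weights pi_i (illustrative)
rates = [2.0, 8.0]     # Poisson rates lambda_i (illustrative)
samples = [sample_mixture(weights, rates, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)
# The mixture mean is 0.7*2 + 0.3*8 = 3.8; the empirical mean should be close.
print(round(mean, 2))
```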

The life-cycle of the artefacts, namely the arrivals and the leavings of the artefacts in M2D, is ruled by a time-homogeneous, irreducible, continuous-time Markov chain {N(t) | t ≥ 0} with state space N+, whose stationary transition probabilities p_ij(Δt) between state i and state j, for infinitesimal lapses of time Δt, are given by

$$p_{ij}(\Delta t) := P(N(t + \Delta t) = j \mid N(t) = i) =
\begin{cases}
\lambda\nu(M_{2D})\,\Delta t + o(\Delta t) & \text{if } j = i + 1 \\
1 - (\lambda\nu(M_{2D}) + \mu)\,\Delta t + o(\Delta t) & \text{if } j = i \\
\mu\,\Delta t + o(\Delta t) & \text{if } j = i - 1 \\
o(\Delta t) & \text{if } |j - i| > 1
\end{cases}$$
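A process with these transition rates can be simulated with a standard Gillespie-style loop. In this sketch the birth rate λν(M2D) is collapsed into a single constant, and all numeric values are illustrative:

```python
import random

def simulate_birth_death(birth_rate, death_rate, n0, t_end, rng):
    """Gillespie-style simulation of the artefact count N(t).
    Births (arrivals) occur at a constant rate; deaths (departures)
    occur at a constant rate while at least one artefact is present."""
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while True:
        total = birth_rate + (death_rate if n > 0 else 0.0)
        t += rng.expovariate(total)     # exponential waiting time to next event
        if t >= t_end:
            break
        if rng.random() < birth_rate / total:
            n += 1                      # arrival:   j = i + 1
        else:
            n -= 1                      # departure: j = i - 1
        trajectory.append((t, n))
    return trajectory

rng = random.Random(1)
traj = simulate_birth_death(birth_rate=0.5, death_rate=0.4, n0=0, t_end=100.0, rng=rng)
print(len(traj), traj[-1][1])  # number of events and final artefact count
```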

{N(t) | t ≥ 0} can be seen as a special case of a birth-death process [4], with birth rates λ_i = λν(M2D) and death rates µ_i = µ, for each i ∈ N+.
The behavior of an artefact is determined by a finite-horizon Markov Decision Process

HA(t) = {D,S,As, pt(·|s, α), rt(·|s, α) : t ∈ D, s ∈ S, α ∈ As}

• D = {0, . . . , d}: set of decision epochs.
• S ≡ M2D × {0, π/2, π, 3π/2, 2π}: state space.
• As: set of possible actions.
• α : B × Π × S → (k1, . . . , km)ᵀ: behavior-motion mapping.

HA(t) selects the motion actions up to the time horizon d, according to the underlying action policy.
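A finite-horizon rollout of such a behavioral model can be sketched as follows. The grid state (x, y, heading), the action set, and the random policy are illustrative stand-ins for the poster's HA(t), not its actual implementation:

```python
import math
import random

ACTIONS = ("forward", "turn_left", "turn_right", "wait")

def step(state, action):
    """Apply one motion action; state is (x, y, heading), with the heading a
    multiple of pi/2, mirroring S = M2D x {0, pi/2, pi, 3pi/2, 2pi}."""
    x, y, h = state
    if action == "forward":
        x += round(math.cos(h))
        y += round(math.sin(h))
    elif action == "turn_left":
        h = (h + math.pi / 2) % (2 * math.pi)
    elif action == "turn_right":
        h = (h - math.pi / 2) % (2 * math.pi)
    return (x, y, h)

def rollout(policy, s0, horizon, rng):
    """Select motion actions up to the time horizon d under a given policy."""
    states = [s0]
    for t in range(horizon):
        a = policy(t, states[-1], rng)
        states.append(step(states[-1], a))
    return states

random_policy = lambda t, s, rng: rng.choice(ACTIONS)
path = rollout(random_policy, s0=(0, 0, 0.0), horizon=8, rng=random.Random(2))
print(len(path))  # horizon + 1 states, including the initial one
```

A reward-maximising policy over p_t(·|s, α) and r_t(·|s, α) would slot in exactly where `random_policy` sits.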

The AR-builder server

The AR-builder server interconnects the real environment model with the simulation model of the artefacts. The AR-builder server relies on the tf software library in order to suitably register the artefacts into the real environment. The AR-builder server correctly places an artefact within the real environment by both projecting its bounding box b on M2D and concatenating the vertexes of the polygonal mesh SA to the voxels of M3D.
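Registering an artefact pose into the map frame, which ARE delegates to the ROS tf library, amounts to composing homogeneous transforms. A dependency-free 2D sketch (frame names and poses are illustrative, not the actual tf calls):

```python
import math

def make_transform(x, y, theta):
    """2D homogeneous transform (3x3) for a pose expressed in a parent frame."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def apply(T, point):
    """Map a 2D point through transform T (rotate, then translate)."""
    x, y = point
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Robot pose in the map frame; artefact pose expressed in the robot frame.
T_map_robot = make_transform(2.0, 1.0, math.pi / 2)
artefact_in_robot = (1.0, 0.0)   # one metre ahead of the robot
print(apply(T_map_robot, artefact_in_robot))  # -> (2.0, 2.0): ahead of a robot facing +y
```

In the real system, tf maintains the map-robot-artefact frame tree and performs this composition, time-stamped, on the robot's behalf.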

Figure 5: The augmented model of the real world

The AR-builder server implements a collision detection algorithm based on pairwise hit-testing. Collisions are resolved either by moving the artefact back to its last known safe pose or by allowing the artefact to move up to a safe distance. The AR-builder server checks occlusion effects by implementing a ray-tracing version of the z-buffer algorithm. The algorithm relies on the following structures:
• The view model Fview = {P, (W, dmax)} of the real robot.
• The occlusion matrix ZS3D = {d^k_{i,j} | d^k_{i,j} ∈ [0, dmax], ∀i, j} of the implicit surface of the polygonal mesh S3D of the real environment.
• The occlusion matrix ZA of each artefact A populating the real environment.

The set of artefacts perceived by the real robot is

$$A_f = \{a \mid \mathrm{count}((Z_A - Z_{S_{3D}})_{ij} \le 0) > \tau\}$$
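The visibility rule compares, per pixel, the artefact's depth against the environment depth and counts the pixels where the artefact is not behind the surface. A sketch of that count-and-threshold test; the matrix contents and τ are illustrative, and the extra d_max guard (skipping pixels where the artefact is absent) is an assumption not spelled out on the poster:

```python
def visible(z_artefact, z_env, tau, d_max):
    """Perceived iff, in more than tau pixels, the artefact depth is not
    behind the environment surface (Z_A - Z_S3D <= 0, as in the A_f rule)."""
    count = 0
    for row_a, row_e in zip(z_artefact, z_env):
        for da, de in zip(row_a, row_e):
            if da < d_max and da <= de:   # artefact pixel present and unoccluded
                count += 1
    return count > tau

D = 10.0                                            # d_max: "no surface" depth
z_env = [[D, D, D], [D, 2.0, D], [D, D, D]]         # a wall patch at 2 m
z_art = [[3.0, 3.0, D], [3.0, 3.0, D], [D, D, D]]   # artefact at 3 m, partly behind the wall
print(visible(z_art, z_env, tau=2, d_max=D))        # -> True: 3 unoccluded pixels > tau
```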

Experiments

Two different experiments have been set up, in which ARE has been used to populate the real surroundings with artefacts, in order to:
• check the robot's ability to replan the path towards a goal location as the frequency of the arrivals of the artefacts into the environment changes;
• test the long-term capability of the robot to navigate the cluttered environment in order to reach several goal locations.

Figure 6: Numerical results

To measure the robot's ability to replan, the following time ratio is introduced:

$$\rho = \frac{\rho_t}{\rho_t + G_t}$$

The long-term capability of the robot to navigate the cluttered environment has been measured with respect to the space complexity of the environment. This complexity is defined by the following space ratio:

$$\nu = \frac{n_A}{n_{free}}$$
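Both evaluation measures are simple ratios. The poster does not spell out ρ_t and G_t; the sketch below reads them as time spent replanning and time spent progressing to the goal, and the numbers are purely illustrative:

```python
def time_ratio(replanning_time, goal_time):
    """rho = rho_t / (rho_t + G_t): fraction of mission time spent replanning
    (interpretation of rho_t and G_t assumed, not stated on the poster)."""
    return replanning_time / (replanning_time + goal_time)

def space_ratio(n_artefacts, n_free_cells):
    """nu = n_A / n_free: artefact count relative to the free space in M2D."""
    return n_artefacts / n_free_cells

print(time_ratio(12.0, 48.0))   # -> 0.2
print(space_ratio(25, 500))     # -> 0.05
```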

Figure 7: The augmented reality view in ROS rviz

Conclusions

We proposed a framework to augment the robot's real world that advances the state of the art: together with the augmented environment, it introduces the robot's perceptual model of that environment and the possibility of tuning the degree of confidence and uncertainty of the robot about what is presented in the augmented scene. The framework has been used to test both the short-term and the long-term capability of the robot to navigate the real environment, populated by artefacts, as the complexity of the environment changed according to the parameters of ARE.

Acknowledgements

The research has been funded by the EU-FP7 NIFTI Project, Contract No. 247870.

References

[1] Ronald Azuma, Yohan Baillot, Reinhold Behringer, Steven Feiner, Simon Julier, and Blair MacIntyre. Recent advances in augmented reality. IEEE Comput. Graph. Appl., 21(6):34–47, 2001.

[2] J. W. S. Chong, S. K. Ong, A. Y. C. Nee, and K. Youcef-Toumi. Robot programming using augmented reality: An interactive method for planning collision-free paths. Robotics and Computer-Integrated Manufacturing, 25(3):689–701, 2009.

[3] T. H. J. Collett and B. A. MacDonald. Augmented reality visualisation for Player. In ICRA, pages 3954–3959, 2006.

[4] Geoffrey Grimmett and David Stirzaker. Probability and Random Processes. Oxford University Press, third edition, 2001.

[5] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.

[6] Michael Stilman, Philipp Michel, Joel Chestnutt, Koichi Nishiwaki, Satoshi Kagami, and James Kuffner. Augmented reality for robot development and experimentation. Technical report, Robotics Institute, 2005.

[7] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005.

[8] Kai M. Wurm, Armin Hornung, Maren Bennewitz, Cyrill Stachniss, and Wolfram Burgard. OctoMap: A probabilistic, flexible, and compact 3D map representation for robotic systems. In Proc. ICRA Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation, 2010.