SRS Deliverable 5.3-Final Due date: March 2013
FP7 ICT Contract No. 247772 1 February 2010 – 30 April 2013 Page 1 of 128
DELIVERABLE D5.3
System evaluation and test report
Contract number : 247772
Project acronym : SRS
Project title : Multi-Role Shadow Robotic System for Independent Living
Deliverable number : D5.3
Nature : R – Report
Dissemination level : PU – PUBLIC
Delivery date : 30-03-2013
Author(s) : Yuying Xia, Renxi Qiu
Partners contributed : ALL
Contact : Qiur@cf.ac.uk
SRS
Multi-Role Shadow Robotic System for Independent Living
Small or medium scale focused research project (STREP)
The SRS project was funded by the European Commission under the 7th Framework Programme (FP7) – Challenges 7: Independent living, inclusion and Governance Coordinator: Cardiff University
Document History:
Version Author(s) Date Changes
1 Yuying Xia 15 March 2013 Table of Content
2 Yuying Xia 30 March 2013 Main Contents
3 Renxi Qiu 15 April 2013 Format Check
EXECUTIVE SUMMARY
The SRS (Multi-Role Shadow Robotic System for Independent Living) project aims to develop remotely controlled, semi-autonomous robotic solutions to support elderly people in domestic environments. In this study, the SRS prototype has been evaluated and tested at the system level.
To provide an overall evaluation of the SRS, a three-phase evaluation has been carried out to demonstrate the system performance. Phase 1 delivered the proof of technology, which includes tests of the robot demonstrator, the general hardware/software configuration, and the individual software components. Phase 2 proves the system performance through several on-site user tests and integrated system simulations. Phase 3 proves the system usability based on user validations and further system integration tests.
First of all, the overall structure of the SRS is summarized in Section 2. The SRS system includes the robot demonstrator and several software components. Care-O-bot 3, a mobile manipulation robot designed by Fraunhofer IPA, is used as the demonstrator within this project. The Care-O-bot 3 software has
been integrated with ROS, and supports everything from low-level device drivers to simulation inside of
Gazebo. The robot has two sides: a manipulation side and an interaction side. The manipulation side has
a SCHUNK Lightweight Arm 3 with SDH gripper for grasping objects in the environment. The interaction
side has a touchscreen tray that serves as both input and output. People can use the touchscreen to
select tasks, such as placing drink orders, and the tray can deliver objects to people, like their selected
beverage. The SRS also consists of several components working together and exchanging information
through a ROS infrastructure. The individual components and their functional characteristics are based
on the SRS system functional requirements. The core of the SRS system consists of the following main
components: Decision Making (DM), Local User Interface (UI_LOC), Private User Interface (UI_PRI),
Professional User Interface (UI_PRO), Human Sensing (HS), Environment Perception (EP), Grasping,
Object Detection (OD), General Household Object Database (GHOD), Semantic Knowledge Base (KB),
Learning, Mixed Reality Server (MRS) and Symbolic Grounding (SG).
The Proof of Technology then demonstrates the technical performance and safety of the SRS system, covering both the hardware and the software components. The tests of the robot demonstrator, the general hardware/software configuration, and the software components are shown in
Section 3. All the mechanical parts and modules of Care-O-bot® 3 have been checked in order to provide a functional system, including: Light-Weight Arm (7 DOF), Gripper (7 DOF), tactile sensors, loudspeaker, computer bay (holding 3-5 PCs), omnidirectional platform (4*2 DOF), sensor head (1 DOF, stereo and 3D ToF), 2 PT units (2*2 DOF), tray (1 DOF), touch screen, battery and 3 laser scanners (front, back and top of the mobile base). The software components directly related to the mechanical parts have been tested in this section as well, including: cob extern, cob common, schunk modular robotics, cob driver, cob command tools, cob robots and cob environments. The batteries and power supply, and the emergency stop, have been tested for system safety. The robot has been run and proved to be working well. The general hardware and software configuration tests focused on the system setup, including the robot PC setup, network configuration, etc. Each software component has been tested with respect to its structure, ROS API, objects, service/task requests and actions, etc. Usage examples and the installation/configuration of the software components have been tested at the same time; the test results are summarized in Appendices 1 and 2 attached below. The test results show that the system is stable and robust in operation.
Following the Proof of Technology, the performance of the system has been evaluated through several user tests and integrated system simulations. After each user evaluation, the system was updated based on the latest test results. The final system was checked both by the user test in Stuttgart, Germany in February 2012 and by the COB manipulation SRS grasping tests within the SRS scenario. In that test, a milk box was placed on a table in the IPA kitchen; the robot was expected to grasp it and put it on the tray. The success rate was measured against two targets: first, the robot completes the task either by itself or with user intervention triggered from UI_PRO; second, the robot completes the task by itself without any help.
From the final test results we conclude that the expected performance of the SRS system has been confirmed. The success rate of task completion including both robot self-completion and user intervention is 100%, which indicates that the designed system is highly reliable. The success rate of task completion by the robot alone is 80%, which also exceeds the design target for the system. Due to limitations in robot task performance and the occurrence of unexpected situations, the robot was occasionally unable to complete its task autonomously, which shows the importance of the semi-autonomous approach: human intervention and guidance guarantee task completion when the autonomous robotic behaviour fails.
The proof of usability is based on user validations and further system integration tests. The user validations were carried out three times: in January 2012 in San Sebastián, Spain; in February 2012 in Stuttgart, Germany; and in May 2012 in Milan, Italy. The on-site test participants included local users (both elderly people and younger disabled people), private users and professional operators. After each on-site user validation test, the suggestions for the user interfaces were summarized. Based on the user test results and suggestions, UI_LOC, UI_PRI, UI_BUT and UI_PRO were updated and then validated by integration tests and user validations, performed twice in Stuttgart, Germany, in November 2012 and February 2013, respectively. The update and evaluation results show that all the updated UIs work well, which is a strong proof of the usability of the SRS system.
During the user tests, acceptance has also been analysed. In these tests, we combined ad-hoc questions with the AttrakDiff questionnaire, selected to measure user experience in a simple and immediate manner. We also collected the users' perception of the robot and the system through oral questionnaires and conversation. The test results show a high acceptance rate for the system. The majority of people in the local, private and professional user groups like the concept and design of SRS, and all reported feeling safe and at ease in front of the robot. Some issues should nevertheless be mentioned for further development: the elderly people are slightly concerned about learnability; the private caregivers are more concerned about potential failures; and the professional operators are more concerned with the current state of development of comparable products and projects. As the main target of this report is to prove the system performance from a technological point of view, the summary of the proof of acceptance is attached as Appendix 3.
To sum up, this report describes the system evaluation and tests. From the evaluation and test results we conclude that the system works well for the purpose of basic use by elderly people. The system is highly accepted and perceived as interesting and potentially useful. The user interface is friendly and highly learnable. However, due to limitations in robot task performance and the occurrence of unexpected situations, the robot was occasionally unable to complete its task autonomously, which shows the importance of the semi-autonomous robotic solution: human intervention and guidance guarantee task completion when the autonomous robotic behaviour fails.
TABLE OF CONTENTS
1 INTRODUCTION .................................................................................................................................... 7
2 OVERVIEW OF SRS SYSTEM STRUCTURE .............................................. 7
2.1 ROBOT DEMONSTRATORS ........................................................ 7
2.2 SRS COMPONENTS ............................................................. 9
3 PROOF OF TECHNOLOGY .......................................................... 11
3.1 HARDWARE-Mechanical Parts and Modules ...................................... 11
3.1.1 General hardware ......................................................... 11
3.1.2 Batteries and power supply ............................................... 11
3.1.3 Emergency stop ........................................................... 12
3.1.4 Running the robot ........................................................ 12
3.2 GENERAL HARDWARE AND SOFTWARE CONFIGURATION ................................ 15
3.2.1 Setup robot pcs .......................................................... 15
3.2.2 Install ROS and driver software .......................................... 19
3.3 SOFTWARE COMPONENTS ........................................................ 24
3.3.1 Decision making (srs_decision_making) .................................... 24
3.3.2 Mixed reality server (srs_mixed_reality_server) .......................... 28
3.3.3 Knowledge base (srs_knowledge) ........................................... 30
3.3.4 Grasping (srs_grasping) .................................................. 34
3.3.5 Simple scenario (srs_scenarios) .......................................... 36
3.3.6 Human sensing (srs_leg_detector) ......................................... 36
3.3.7 Private User Interface (srs_ui_pri) ...................................... 36
3.3.8 Local User Interface (srs_ui_loc) ........................................ 36
3.3.9 Assisted arm navigation (srs_assisted_arm_navigation) .................... 36
3.3.10 Interaction primitives (srs_interaction_primitives) ..................... 43
3.3.11 Environment model (srs_env_model) ....................................... 53
3.3.12 Environment perception (srs_env_model_percp) ............................ 59
3.3.13 Spacenav Teleop (cob_spacenav_teleop) ................................... 61
3.3.14 Arm Navigation Tests (srs_arm_navigation_tests) ......................... 63
3.3.15 Assisted Arm Navigation (srs_assisted_arm_navigation) ................... 63
3.3.16 Assisted Arm Navigation UI (srs_assisted_arm_navigation_ui) ............. 71
3.3.17 Assisted Grasping (srs_assisted_grasping) ............................... 75
3.3.18 Assisted Grasping UI (srs_assisted_grasping_ui) ......................... 77
3.3.19 Env Model (srs_env_model) ............................................... 77
3.3.20 UI BUT (srs_ui_but) ..................................................... 84
3.3.21 User Test Package (srs_user_tests) ...................................... 88
3.4 CONCLUSIONS ............................................................................................................................ 90
4 PROOF OF PERFORMANCE ......................................................... 91
4.1 OBJECTIVE .................................................................. 91
4.2 TEST PROCEDURE ............................................................. 91
4.3 FIRST TEST AND ANALYSIS .................................................... 91
4.4 FINAL TEST ................................................................. 93
4.5 SUMMARY .................................................................... 93
5 PROOF OF USABILITY ........................................................... 94
5.1 INTRODUCTION ............................................................... 94
5.2 PROOF OF USABILITY FOR LOCAL USERS ......................................... 94
5.3 PROOF OF USABILITY FOR PRIVATE USERS ....................................... 95
5.4 PROOF OF USABILITY FOR PROFESSIONAL USERS .................................. 99
6 CONCLUSIONS .................................................................................................................................. 102
7 REFERENCES ..................................................................................................................................... 103
8 APPENDIX 1-SOFTWARE COMPONENTS USAGE/EXAMPLES ............................... 103
8.1 Decision making (srs_decision_making) ..................................... 103
8.2 Knowledge base (srs_knowledge) ............................................ 104
8.3 Grasping (srs_grasping) ................................................... 106
8.4 Simple scenario (srs_scenarios) ........................................... 106
8.5 Human sensing (srs_leg_detector) .......................................... 107
8.6 Private User Interface (srs_ui_pri) ....................................... 107
8.7 Local User Interface (srs_ui_loc) ......................................... 108
8.8 Assisted arm navigation (srs_assisted_arm_navigation) ..................... 108
8.9 Interaction primitives (srs_interaction_primitives) ....................... 109
8.10 Environment perception (srs_env_model_percp) ............................. 111
8.11 Spacenav Teleop (cob_spacenav_teleop) .................................... 111
8.12 Arm Navigation Tests (srs_arm_navigation_tests) .......................... 112
8.13 Assisted Arm Navigation (srs_assisted_arm_navigation) .................... 112
8.14 UI BUT (srs_ui_but) ...................................................... 113
9 APPENDIX 2-SOFTWARE COMPONENTS INSTALLATION ................................. 115
9.1 Mixed reality server (srs_mixed_reality_server) ........................... 115
9.2 Knowledge base (srs_knowledge) ............................................ 116
9.3 Grasping (srs_grasping) ................................................... 116
9.4 Private User Interface (srs_ui_pri) ....................................... 117
9.5 Local User Interface (srs_ui_loc) ......................................... 117
9.6 Assisted arm navigation (srs_assisted_arm_navigation) ..................... 118
9.7 Interaction primitives (srs_interaction_primitives) ....................... 118
9.8 Environment model (srs_env_model) ......................................... 118
9.9 Environment perception (srs_env_model_percp) .............................. 120
9.10 Arm Navigation Tests (srs_arm_navigation_tests) .......................... 121
9.11 Assisted Arm Navigation (srs_assisted_arm_navigation) .................... 121
9.12 Env Model (srs_env_model) ................................................ 122
9.13 UI BUT (srs_ui_but) ...................................................... 124
10 APPENDIX 3-PROOF OF ACCEPTANCE ............................................. 125
10.1 Introduction ............................................................. 125
10.2 Tests results ............................................................ 125
10.3 Summary .................................................................. 127
1 INTRODUCTION
This document describes the system evaluation and testing of the Multi-Role Shadow Robotic System for Independent Living (SRS), which comprises the following three stages of evaluation:
First of all, the overall structure of the SRS is summarized in Section 2. The Proof of Technology then demonstrates the technical performance and safety of the SRS system, covering both the hardware and the software components. The tests of the robot demonstrator, the general hardware/software configuration, and the software components are shown in Section 3.
Following the Proof of Technology, the performance of the system has been evaluated through several user tests and integrated system simulations. After each user evaluation, the system was updated based on the latest test results. The final system was checked both by the user test in Stuttgart, Germany in February 2012 and by the COB manipulation SRS grasping tests within the SRS scenario.
The proof of usability is based on user validations and further system integration tests. The user validations were carried out three times: in January 2012 in San Sebastián, Spain; in February 2012 in Stuttgart, Germany; and in May 2012 in Milan, Italy. After each on-site user validation test, the suggestions for the user interfaces were summarized. The UIs were then updated and validated by integration tests and user validations, performed twice in Stuttgart, Germany, in November 2012 and February 2013, respectively. During the user tests, acceptance has also been analysed.
This report is complemented by a conclusion giving indications for further development.
2 OVERVIEW OF SRS SYSTEM STRUCTURE
The SRS is a remotely controlled, semi-autonomous robotic system for personalized home care in domestic environments. The SRS system includes the robot demonstrator and several components, as follows:
2.1 ROBOT DEMONSTRATORS
Care-O-bot 3, which is shown in Figure 1, is used as a demonstrator within this project.
FIGURE 1 CARE-O-BOT 3
The Care-O-bot 3 is a mobile manipulation robot designed by Fraunhofer IPA, available both as a commercial robotic butler and as a platform for research. The Care-O-bot software has been integrated with ROS, and supports everything from low-level device drivers to simulation inside of Gazebo.
The robot has two sides: a manipulation side and an interaction side. The manipulation side has a SCHUNK Lightweight Arm 3 with SDH gripper for grasping objects in the environment. The interaction side has a touchscreen tray that serves as both input and "output". People can use the touchscreen to select tasks, such as placing drink orders, and the tray can deliver objects to people, like their selected beverage.
The structure of Care-O-bot 3 is shown in Figure 2 below:
FIGURE 2 STRUCTURE OF CARE-O-BOT® 3
The technical data of Care-O-bot 3 are listed in Table 1 and the software overview in Table 2 below. For sensing, the Care-O-bot 3 uses two SICK S300 laser scanners, a Hokuyo URG-04LX laser scanner, two Pike F-145 FireWire cameras for stereo vision, and a MESA Swissranger SR3000/SR4000. The cob_driver stack provides the ROS software integration for these sensors.
The Care-O-bot controls a SCHUNK LWA3 arm, an SDH gripper, and a tray mounted on a PRL 100 over a CAN interface for interacting with its environment. It also has SCHUNK PW 90 and PW 70 pan/tilt units, which give it the ability to bow through its foam outer shell. The CAN interface is supported through several Care-O-bot ROS packages, including cob_generic_can and cob_canopen_motor, as well as wrappers for libntcan and libpcan. The SCHUNK components are also supported by various packages in the cob_driver stack.
TABLE 1 TECHNICAL DATA OF CARE-O-BOT
Dimensions (L/W/H): 75/55/145 cm
Weight: 180 kg
Power supply: Gaia rechargeable Li-ion battery, 60 Ah, 48 V; internal 48 V, 12 V and 5 V separate power supplies to motors and controllers; all motors connected to the emergency-stop circuit
Omnidirectional platform: Neobotix MOR with 8 motors (2 per wheel: 1 for the rotation axis, 1 for drive); Elmo controllers (CAN interface); 2 SICK S300 laser scanners; 1 Hokuyo URG-04LX laser scanner; speed up to 1.5 m/s
Arm: Schunk LWA 3 (extended to 120 cm); CAN interface (1000 kbaud); payload 3 kg
Gripper: Schunk SDH with tactile sensors; CAN interfaces for tactile sensors and fingers
Torso: 1 Schunk PW 90 pan/tilt unit; 1 Schunk PW 70 pan/tilt unit; 1 Nanotec DB42M axis; Elmo controller (CAN interface)
Sensor head: 2 AVT Pike 145 C, 1394b, 1330×1038 (stereo circuit); MESA Swissranger 4000 or Microsoft Kinect
Tray: 1 Schunk PRL 100 axis; LCD display; touch screen
Processor architecture: 3 PCs (2 GHz Pentium M, 1 GB RAM, 40 GB HDD)
TABLE 2 SOFTWARE OVERVIEW OF CARE-O-BOT 3
cob extern: The cob extern stack contains third-party libraries needed for operating Care-O-bot.
cob common: The cob common stack hosts common packages used within the Care-O-bot repository: the URDF description of the robot (its kinematic and dynamic model), 3D models of robot components, the information required for Gazebo to simulate the COB, and utility packages with common message and service definitions.
schunk modular robotics: This repository includes drivers and models for Schunk products, such as PowerCubes or the SDH.
cob driver: The cob driver stack includes packages that provide access to the Care-O-bot hardware through ROS messages, services and actions, e.g. for the mobile base, arm, camera sensors, laser scanners, etc.
cob command tools: This stack provides the source code of the tools needed to command the robot: cob command gui, cob dashboard, cob script server and cob teleop.
cob robots: The cob robots stack collects the Care-O-bot components used in bringing up a robot. The user's interface to the cob robots stack is cob bringup.
cob environments: This stack provides the parameters for default environment configurations.
2.2 SRS COMPONENTS
The SRS consists of several components working together and exchanging information through a ROS infrastructure. The individual components and their functional characteristics are based on the SRS system functional requirements. The core of the SRS system consists of the following main components:
Decision Making (DM) SRS decision making is the centre of the SRS control structure. In order to optimise the decision making process, SRS actions are divided into three layers:
1. Primitive action level: basic robot actions developed by other SRS components or imported from Care-O-bot. The actions at this level are treated as the generic states in SRS decision making.
2. Decision level: action schemas which fulfil certain skills. The actions at this level are treated as the state machines in SRS decision making.
3. Task level: the basis of SRS high-level commands, which can be issued by a UI to control the robot's behaviour. SRS application scenarios are constructed from the commands at this level.
The concept of DM is shown in Figure 3. In the decision making module, Level 1 and Level 2 are connected by a collection of SRS high-level state machines within the SRS action server. The idea is to connect the inputs, outputs and robot configurations of primitive actions together automatically, much as the cerebellum does in the human brain. Level 2 and Level 3 are connected by the SRS knowledge service.
FIGURE 3 CONCEPT OF DECISION MAKING (DM)
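The three action levels above can be sketched in plain Python as follows. This is a simplified stand-in for the SMACH-based state machines actually used in srs_decision_making; the action names, return values and the "fetch object" task are invented for illustration:

```python
# Level 1: primitive actions wrapped as generic states (stubs here).
def move_to(target):
    return "succeeded"

def detect_object(name):
    return "succeeded"

def grasp(name):
    return "succeeded"

# Level 2: a decision-level action schema chains primitive actions into
# a skill; each (state name, action) pair is one state in the machine.
FETCH_SKILL = [
    ("MOVE", lambda ctx: move_to(ctx["location"])),
    ("DETECT", lambda ctx: detect_object(ctx["object"])),
    ("GRASP", lambda ctx: grasp(ctx["object"])),
]

def run_skill(skill, ctx):
    """Execute the states in order; abort the schema on the first failure."""
    for state_name, action in skill:
        if action(ctx) != "succeeded":
            return "failed in " + state_name
    return "succeeded"

# Level 3: a task maps a high-level UI command onto a skill.
TASKS = {"fetch object": FETCH_SKILL}

def execute_command(command, ctx):
    return run_skill(TASKS[command], ctx)

print(execute_command("fetch object",
                      {"location": "kitchen", "object": "milk box"}))
```

The point of the layering is visible in the sketch: the UI only ever issues task-level commands, while failures surface at the decision level, where user intervention can take over a single state without restarting the whole task.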
Local User Interface (UI_LOC) The local user interface allows the local user to issue a number of commands to the robot, e.g. "Bring me water". This module is developed by IMA.
Private User Interface (UI_PRI) The private user interface allows a non-professional remote operator, e.g. an extended family member or caregiver, to operate the robot remotely. This interface is able to visualize a real-time video stream from the on-board cameras of the robot. It also allows high-level control of the robot and manual intervention when the autonomous mode of execution fails to accomplish the task. The module is developed by ISER-BAS.
Professional user interface (UI_PRO) The professional user interface allows full remote control, including low level remote control. It is designed to be used by the professional remote operator service to control the SRS system when the extended family members or care-givers are not available or are unable to deal with the control of the robot. The module is developed by ROB.
Human Sensing (HS) SRS human sensing uses the laser range finders of the Care-O-bot to locate a person around the robot. The location of the human is visualised on a room map displayed on the UI_PRI and UI_PRO interfaces. The main aim is to increase the awareness of the remote operator (RO) of the local environment in which the robot operates. The module is developed by CU.
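The underlying idea of laser-based person detection can be illustrated with a toy sketch: cluster consecutive scan readings and flag clusters whose width matches a human leg. The real srs_leg_detector is considerably more sophisticated; the cluster widths and the jump threshold below are illustrative assumptions:

```python
import math

def find_leg_clusters(ranges, angle_step, jump=0.1):
    """Group consecutive readings whose range changes by less than
    `jump` metres, then keep clusters roughly 5-25 cm wide."""
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) < jump:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)

    legs = []
    for c in clusters:
        r = ranges[c[len(c) // 2]]           # range at the cluster centre
        width = r * angle_step * len(c)      # approximate arc width in metres
        if 0.05 <= width <= 0.25:
            legs.append((r, c[len(c) // 2])) # (distance, beam index)
    return legs

# Synthetic scan: a wall at 3 m with a leg-like object at 1 m.
scan = [3.0] * 20 + [1.0] * 6 + [3.0] * 20
legs = find_leg_clusters(scan, angle_step=math.radians(1.0))
print(legs)
```

A detected leg position would then be projected onto the room map shown in UI_PRI/UI_PRO, which is how the remote operator gains awareness of people near the robot.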
Environment Perception (EP) The ROS environment perception package is provided by the dcgm-robotics@FIT group. This package provides several utilities for environment perception and modelling from RGB-D sensor data, e.g. from a Kinect device. This information is used in planning the navigation and actions of the robot.
Grasping This package provides tools to generate and simulate different grasping solutions on the Care-O-bot. This module uses information from the environment perception module, the General Household Object Database and the Knowledge Base (KB) to calculate the best grasping points.
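Selecting a "best" grasping point can be sketched as ranking candidate grasp poses by a simple cost. Note that srs_grasping actually simulates grasps rather than scoring them this way; the candidate fields, weights and cost function below are invented for illustration:

```python
import math

def grasp_cost(candidate, gripper_pos, w_dist=1.0, w_angle=0.5):
    """Cost = weighted distance from the gripper plus a penalty for
    deviating from a top-down approach (approach_angle == 0)."""
    dist = math.dist(candidate["position"], gripper_pos)
    angle_penalty = abs(candidate["approach_angle"])
    return w_dist * dist + w_angle * angle_penalty

def best_grasp(candidates, gripper_pos):
    return min(candidates, key=lambda c: grasp_cost(c, gripper_pos))

candidates = [
    {"id": "side", "position": (0.6, 0.1, 0.8), "approach_angle": math.pi / 2},
    {"id": "top", "position": (0.6, 0.1, 0.9), "approach_angle": 0.0},
]
print(best_grasp(candidates, gripper_pos=(0.0, 0.0, 1.0))["id"])
```

In the real component, the candidates would come from the object model in the GHOD and the cost would be replaced by simulated grasp quality, but the selection step has the same shape.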
Object Detection (OD) This module detects and identifies previously learned objects. The information from the detection is used in grasping and later stored in the General Household Object Database for future use, e.g. for faster searching for this object. The module is developed by IPA and Profactor.
General Household Object Database (GHOD)
This module stores information about known objects in the SRS system, including geometric shape, typical pose and appearance (images). The module is developed by HPIS.
Semantic Knowledge Base (KB) Semantic Knowledge Base is a package handling the task planning at the symbolic level in the SRS project. It stores information identifying content by type and meaning via descriptive metadata. The module is developed by CU.
Learning This module consists of a number of self-learning services (SLS) that evolve behaviour aspects of COB using recorded data from its operation. This module is developed by BED.
Mixed Reality Server (MRS) The Mixed Reality Server is a component of the SRS project for the elderly-care robot Care-O-bot. It provides an augmented reality stream for the UI_PRI user interface using information from the map server, the Household database and the SRS Knowledge Database.
Symbolic Grounding (SG) This component “translates” symbolic terms such as “near” and “region” contained in high-level commands into the destination positions used in the low-level commands. This module is developed by BED.
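As an illustration of how such grounding can work, the sketch below turns the symbolic term "near &lt;object&gt;" into a concrete 2D target pose by placing the robot at a fixed standoff from the object, facing it. All names and numbers here are hypothetical and are not taken from the SRS implementation:

```python
import math

def ground_near(object_pose, robot_radius=0.5, standoff=0.25):
    """Ground the symbolic term 'near <object>' into a (x, y, theta) pose.

    Hypothetical sketch: place the robot on the line between the object and
    the map origin, at (robot_radius + standoff) metres from the object,
    oriented towards it. The radius and standoff values are assumptions.
    """
    ox, oy = object_pose
    d = math.hypot(ox, oy) or 1.0          # avoid division by zero at origin
    r = robot_radius + standoff
    x = ox - r * ox / d                    # back off along the origin-object line
    y = oy - r * oy / d
    theta = math.atan2(oy - y, ox - x)     # face the object
    return (x, y, theta)
```

A real grounding component would also consult the map for reachability and obstacles; this sketch only shows the geometric part of the translation.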
3 PROOF OF TECHNOLOGY
3.1 HARDWARE-MECHANICAL PARTS AND MODULES
3.1.1 GENERAL HARDWARE All the mechanical parts and modules of Care-O-bot® 3 have been checked in order to provide a functional system. These include:
Light-Weight-Arm (7 DOF)
Gripper (7 DOF)
Tactile sensors
Loud speaker
Computer bay (includes 3-5 PCs)
Omnidirectional platform (4*2 DOF)
Sensor head (1 DOF, stereo and 3D-ToF)
2 PT-Units (2*2 DOF)
Tray (1 DOF)
Touch screen
Battery
3 Laser scanners (front, back, top of mobile base)
The software components directly related to the mechanical parts have been tested in this section as well. These include:
cob_extern
cob_common
schunk_modular_robotics
cob_driver
cob_command_tools
cob_robots
cob_environments
3.1.2 BATTERIES AND POWER SUPPLY Care-O-bot is powered by a 48V battery, which can be a Gaia rechargeable Li-ion battery (60 Ah, 48V) or a lead battery. The batteries can be charged with a maximum of 56V at 10A. The robot can be plugged in
during operation. 3.1.3 EMERGENCY STOP As not all robot movements are safe for the user, the robot itself and the environment, the emergency stop should be activated in situations not foreseen by the user. There are three ways of stopping the robot: pressing one of the two red buttons on the sides of the robot, stepping into the safety fields of the Sick S300 scanners, or using the wireless emergency stop. After an emergency stop has been activated, the robot components should be recovered with the command gui; the status of the components can be checked using the dashboard. Emergency stop buttons Press the red buttons on the left and right side of the torso. To release the emergency stop, turn the buttons so that they come out again. After that, turn the key to position II until a “click” is heard. Safety fields of the Sick S300 scanners If the user steps into the safety fields of the laser scanners, the emergency stop is activated automatically. Once the safety fields are free of obstacles again, the emergency stop is released on its own after a few seconds, as shown in Figure 4 below.
FIGURE 4 SAFETY FIELDS OF THE LASER SCANNERS
Remote emergency stop control The user can press the red button to stop the robot. To release the emergency stop, lift the red button and afterwards press the green button. The remote emergency stop control can be disabled using the switch next to the key switch. 3.1.4 RUNNING THE ROBOT First the user has to connect the power supply to the robot, or use the battery by pressing the battery button on the base. To switch the robot on, the user has to use the key: move it to position II and hold it for a few seconds, and the robot will turn on. To turn off the robot, turn the key to position I. After starting up the robot, the emergency stop circuit is still activated. To enable power for the motors, keep the safety area of the Sick S300 scanners free of any obstacle and release the emergency buttons. If the wireless emergency stop is active, release it by lifting the red button and pressing the green button afterwards. The user should hear a “click” as soon as the emergency stop is released.
For safety reasons, if the robot is not supervised any more, e.g. during a break, the emergency stop has to be activated. Logging in to the robot pcs To log in to the robot pcs from a remote PC, the user has to have an account on the robot. The user logs in via ssh (in this example to pc1 of cob3-3): ssh -X username@cob3-3-pc1
Bringup the robot The first step to bring up the robot is to start a roscore, which is necessary for communication between the nodes. The user can run it using roscore.
If the user wants to run the robot, there is a launch file for launching all the components of the robot. It is located in the package cob_bringup.
roslaunch cob_bringup robot.launch
Now all drivers and core components should be started, so the user can continue by checking the robot status in the dashboard or moving the robot using the joystick or command gui.
Using dashboard and command gui To know the state of all the components of the robot the user can use the dashboard tool. To move the robot the user can use the command gui. The user can start both with
roslaunch cob_bringup dashboard.launch
After launching this file the user will see two GUIs on the screen: the smaller one is the dashboard, which gives the user information about the current status of the robot and its components as well as the emergency stop status. The bigger one is called command gui and offers a wide range of buttons to move the robot to predefined positions using low level control commands. It also offers buttons for initialising and recovering the actuators. The emergency stop status is visible in the upper left corner of the command gui. Before initialising or moving the robot, the user should check that the status is OK.
cob dashboard
The dashboard is an important tool where the user can check the state of the robot. If the user clicks the first button, a new window pops up with three levels: Errors, Warnings and All. There the user can see the state of each component at any time. The status monitoring is divided into Actuators, Sensors and Other. The other buttons are for showing diagnostics, motors, emergency status and battery state. In the case of the Care-O-bot 3 the Motors buttons have been disabled.
cob command gui The command gui can be used for sending low level movement commands to the robot components. The standard view of the command gui is shown in Figure 5 below:
FIGURE 5 STANDARD VIEW OF THE COB COMMAND GUI
In this screen-shot there are different columns: general, base, torso, tray, arm settings, arm pos, arm traj, sdh and eyes. The first column is very important when the robot runs. The other columns offer different predefined positions that the corresponding component can move to. Additionally each component has a stop, init and recover button, to stop, initialise or recover a single component. Rviz RVIZ is a tool that visualises data from the robot, e.g. the sensor data from the laser scanners, but also information about the coordinate systems and transformations or the images from the cameras. RVIZ needs to be started on the local machine. To be able to visualise topics from the robot, export the ROS_MASTER_URI pointing to the robot:
export ROS_MASTER_URI=http://cob3-X-pc1:11311
rosrun rviz rviz
To use a predefined configuration for Care-O-bot, start rviz with:
export ROS_MASTER_URI=http://cob3-X-pc1:11311
roslaunch cob_bringup rviz.launch
Joystick To be able to use the joystick, initialise the components using the command gui. For moving the robot components, the dead-man button has to be pressed all the time; as soon as the button is released, all hardware components will be stopped immediately.
• For moving the base: Hold the dead-man button and use the base rotation and translation axis to move the base.
• For moving the torso: Hold the dead-man button and the upper or lower neck button, then use the up down or left right axis to move the torso.
• For moving the tray: Hold the dead-man button and the tray button, then use the up down axis to move the tray.
• For moving the arm: Hold the dead-man button and one of the arm buttons, then use the up down or left right axis to move the selected arm joints.
Figure 6 below shows which buttons command which components.
FIGURE 6 BUTTONS COMMANDING THE COMPONENTS
3.2 GENERAL HARDWARE AND SOFTWARE CONFIGURATION
The list below sums up a set of configurations and scripts which have to be set to run the system properly. The tests in this section aim to identify whether all these configurations have been done in advance or are set automatically when the system boots up.
Operation system setup
Network configuration
Touch screen
Server
Up-to-date version of the repository compiled
Starting up SRS on boot up
3.2.1 SETUP ROBOT PCS On all Care-O-bots there are at least two pcs. Some Care-O-bots have an optional third pc, which is not covered by this manual. Within this section the setup of new pcs has been checked. All actuators are connected to pc1; sensors are connected to both pc1 and pc2: all camera sensors are connected to pc2, whereas all other sensors, e.g. the laser scanners, are connected to pc1. By default pc3 is not connected to any hardware and therefore can be used as additional computing power. Install operating system The first step is to install the operating system for each pc, which means pc1 and pc2 (optionally pc3). Ubuntu is used as the main operating system for the robot. The Ubuntu 10.04 LTS (long term support) 64-bit version is recommended because this version is well tested to work with the hardware. Ubuntu (English version) should be installed creating a normal swap partition. An admin account named robot should be created with a really safe password, which
should only be known to the local robot administrator. The hostname of the pc should be cob3-X-pc1 or cob3-X-pc2. Install basic tools Next the user has to install some basic tools for the further setup of the pcs. In order to install the packages an internet connection is needed:
sudo apt-get update
sudo apt-get install vim tree gitg meld curl openjdk-6-jdk zsh terminator
Setup ssh server Install the openssh server on all robot pcs:
sudo apt-get update
sudo apt-get install openssh-server
Let the server send an alive interval to clients to avoid broken pipes. Execute the following line on all robot pcs:
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config
Setup robot account for administration tasks To facilitate the further setup, a setup repository with some helpful scripts has been created. To check out the setup repository use:
Enable passwordless login to all robot pcs for robot user
Allow robot user to execute sudo command without password. Add robot ALL=(ALL) NOPASSWD: ALL to /etc/sudoers on all robot pcs
Setup root account for administration tasks Enable root account on all robot pcs
Enable passwordless login to all robot pcs for root user
Setup internal robot network Inside the robot there is a router which connects the pcs and acts as gateway to the building network. Set up the router with the following configuration: the ip address of the router should be 192.168.IP.1, and dhcp should be activated for the internal network. Use cob3-X as hostname for the router. Register the MAC addresses of pc1 and pc2 so that they get fixed ip addresses over dhcp: use 192.168.IP.101 for pc1 and 192.168.IP.102 for pc2. Enable port forwarding for port 2201 to 192.168.IP.101 and for port 2202 to 192.168.IP.102, where the IP parameter is defined depending on the robot. After ensuring that the network configuration of the router is set up correctly, the user can configure the pcs. All pcs should have two ethernet ports; the upper one should be connected to the internal router. Sometimes the graphical network manager causes trouble, so it is best to remove it.
After removing the network manager, /etc/network/interfaces has to be edited manually; the user can do this by copying the following lines on the pcs. Network configuration on pc1
Network configuration on pc2
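The internal addressing scheme described above follows a simple pattern parameterised by the robot-specific IP octet. The helper below is a hypothetical illustration of that pattern, not part of the SRS software:

```python
def robot_network(ip_octet):
    """Derive the internal robot network layout for a given robot-specific
    IP octet, following the scheme described above (router .1, pc1 .101,
    pc2 .102, ssh port forwards 2201/2202). Illustrative helper only."""
    net = "192.168.%d" % ip_octet
    return {
        "router": net + ".1",
        "pc1": net + ".101",
        "pc2": net + ".102",
        # port forwards on the router, port -> internal host
        "port_forwards": {2201: net + ".101", 2202: net + ".102"},
    }
```

For example, a robot whose cameras sit on the 192.168.21.x subnet would use ip_octet 21.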
Install NFS After the network is configured properly, the user can set up an NFS between the robot pcs. pc2 will act as
the NFS server and pc1 as the NFS client. In order to protect access to the pcs, it is recommended to create a local administrator user (root user, 3.1.5) on each pc; in this case, if there is a problem with the server (pc2) or the network, this local user can still log in. NFS configuration on pc2 (server) Install the NFS server package and create the NFS directory
Add the following line to /etc/fstab:
Now the user can mount the drive
Activate IDMAPD in /etc/default/nfs-common by changing NEED_IDMAPD to yes
Add the following line to /etc/exports:
Change the home directory of the robot user from /home/robot to /u/robot in the /etc/passwd file. After finishing, a reboot of the pc is required:
NFS configuration on pc1 (client) Install the NFS client package and create the NFS directory
Activate IDMAPD in /etc/default/nfs-common by changing NEED_IDMAPD to yes
Edit /etc/auto.master and add
Create a new file /etc/auto.direct with the following line, where IP is the parameter that defines your robot
network
Activate the NFS
Change the home directory of the robot user from /home/robot to /u/robot in the /etc/passwd file. After finishing, the pc needs to be rebooted.
Setup NTP time synchronisation Install the ntp package
3.2.2 INSTALL ROS AND DRIVER SOFTWARE Setup bash environment A special bash environment is set up to be used on the Care-O-bot pcs. The environments differ on each pc. Copy cob.bash.bashrc.pcY to /etc/cob.bash.bashrc on each pc.
All users have a pre-configured bash environment too, therefore copy user.bashrc to ∼/.bashrc
The .bashrc file is preconfigured for cob3-3 and ipa-apartment; please change the following lines to fit the robot configuration. At the bottom of .bashrc the user has to define ROS_MASTER_URI to be http://cob3-X-pc1:11311, ROBOT to be cob3-X and ROBOT_ENV to point to the environment.
If the user logs out and in again, or sources ∼/.bashrc, the user should see different terminal colours for each pc and the ROS_PACKAGE_PATH should be configured. Create overlays for stacks If the release versions of the stacks are not working, the user can install overlays for individual stacks on the robot user account. To facilitate this process a script has been created which automatically generates ssh-keys, uploads them to github, forks the stack (if necessary) and clones it to the user's machine. The user can either install a read-only version from the main fork (ipa320) or insert his own username and password to fork and clone his own version of the stack. The script can be used by simply typing
All stacks needed for the bringup layer are listed in section 2.2. Note: It should typically only be necessary to create an overlay for three stacks: cob_robots, cob_calibration_data and cob_environments. All other stacks should be used from their release version. Install ROS and additional tools Install python tools
Setup your sources.list Ubuntu 10.04 (Lucid)
Ubuntu 10.10 (Maverick)
Ubuntu 11.04 (Natty)
Set up the keys
Install ROS
If checks
The user should end up in /u/robot/git/care-o-bot/cob_robots/cob_bringup.
Build the bringup level packages Before setting up the components, the user can build all the bringup level packages:
Setup hardware components In order to use the different hardware components, the drivers should be installed and permission rights set. All hardware configuration is stored in the cob_hardware_config package. Setup udev rules In order
to have fixed device names, udev rules are set up for Care-O-bot. Copy the udev rules from the setup repository to /etc/udev/rules.d on pc1 and pc2:
To activate these changes the user has to restart the system. Setup can bus drivers on pc1 This step is necessary for all drivers with a can bus interface (Schunk powercubes, Schunk sdh, head axis, base). In general both can drivers, Peak Systems libpcan and ESD libntcan, can be used.
Sick S300 laser scanners The Sick S300 scanners on the front side and backside of the robot are connected via USB to pc1. Configuration is done in the cob_hardware_config package in config/laser_front.yaml and config/laser_rear.yaml. To receive data from the Sick S300 scanners, check that the user is in the dialout group
For testing the user can run the front laser scanner with
To check if there is some data published use
Check the rear scanner in the same way; the launch file is laser_rear.launch and the topic /scan_rear. The user has to configure the scanners and set up the safety region for each scanner. This can be done using the Sick CDS software. The following configuration is recommended: • System parameters:
– Application name: COB3 – Device name S300[H]: S300-V – Device name S300[G]: S300-H – Name of the user: IPA
• Resolution/scanning range: – Application: Mobile – Resolution: 30 mm
• Restart: – Time delayed by: 2 Seconds
• Field sets:
The safety region should have 4 points: – x=15 cm y=12 cm – x=-15 cm y=12 cm – x≈15 cm y≈2.5 cm (depend on the cover) – x≈-15 cm y≈2.5 cm (depend on the cover)
• Measured data output: – Baud rate: 500 kBaud – Send mode: Continuous data output – Measured data output: Distance – Beginning: -45 – End: 225
Hokuyo URG laser scanner The Hokuyo laser scanner is connected to pc1 via USB. The configuration of this device is defined as parameters in the launch file cob_bringup/components/laser_top.launch. For testing the scanner the user has to run the node:
The user can check the output of the topic:
Relayboard The Relayboard is connected to pc1 via USB. The configuration file (relayboard.yaml) is in the package cob_hardware_config inside the folder cob3-X/config. For testing the Relayboard the user has to launch the file:
Checking the output of the topic /emergency_stop_state proves its correct operation:
Base The elmo controllers should be configured; the parameters of this configuration are in the cob_hardware_config package in cob3-X/config/base.
For testing the user can launch the base with:
Init the component using the service:
And try to move the base using the joystick. Tray sensors The tray sensors are defined in the udev rules, and the library libphidgets for this component is already built in the bringup level. Schunk SDH with tactile sensors There is only one specific firmware and library version supported. Add the robot user to the group dialout in the /etc/group file. The user can launch the SDH with the following file:
Schunk powercubes (lwa, torso, tray) There are two parts where configuration needs to be set. One part of the configuration is done inside the firmware of the powercubes and the other one is done through ROS parameters. First make sure that the powercube settings in the firmware are correct. The user can see and modify the parameters using the windows based PowerCubeCtrl software from the Schunk homepage. The ROS configuration is done in cob_hardware_config, e.g. in cob3-X/config/lwa.yaml or /config/torso.yaml. There is a launch file in cob_bringup per Schunk component for testing:
Head axis The configuration is in the package cob_hardware_config in cob3-X/config/head.yaml. For testing the head axis the user can launch head_solo.launch:
Prosilica cameras The IP address for the cameras should be 192.168.21.101 for the right camera and 192.168.21.102 for the left camera. Kinect The Kinect is connected to pc2. For testing, the user has to launch the Kinect driver on pc2. On pc1:
On pc2:
The user can check the image in a remote pc:
And the point cloud checking the topic:
3.3 SOFTWARE COMPONENTS
3.3.1 DECISION MAKING (SRS_DECISION_MAKING)
Primitive actions and generic state
Primitive actions are the atomic elements in SRS decision making. The generic states and their corresponding inputs and outputs are listed in the following table:
Name of the generic state | Corresponding SRS/COB components | Inputs | Outputs | Outcomes
--- | --- | --- | --- | ---
approach_pose() | COB navigation | Target 2D pose | – | 'succeeded', 'failed'
approach_pose_without_retry() | COB navigation | Target 2D pose | – | 'succeeded', 'failed'
grasp() | SRS assisted grasp + COB manipulation | Target object ID; optional: grasp configuration categorisation (top, side etc.) | Target object pose before grasp | 'succeeded', 'retry', 'no_more_retry', 'failed'
detect_object() | SRS assisted detection | List of object IDs; optional: bounding box | List of identified poses and corresponding IDs | 'succeeded', 'retry', 'no_more_retry', 'failed'
Environment_update() | SRS environment perception | List of object IDs; optional: bounding region | List of identified poses and corresponding IDs | 'succeeded', 'retry', 'no_more_retry', 'failed'
put_object_on_tray() | COB manipulation | – | – | 'succeeded', 'failed'
Placing_object() | COB manipulation | Target 3D coordinate | – | 'succeeded', 'failed'
Open_door() | – | Door_ID | – | 'succeeded', 'retry', 'no_more_retry', 'failed'
A template of the generic state is listed below:
class name_of_the_state(smach.State):
    def __init__(self):
        smach.State.__init__(
            self,
            outcomes=['outcome1', 'outcome2', 'etc'],
            input_keys=['input1', 'input2', 'etc'],    # a key can be both an input_key and an output_key
            output_keys=['output1', 'output2', 'etc']  # input_keys and output_keys can be empty
        )
        # initialise internal memory
        # self.something = ""

    def execute(self, userdata):
        # initialise all output keys, otherwise a higher state machine may raise errors
        userdata.output1 = ""
        userdata.output2 = ""
        userdata.etc = ""
        # initialise all output keys completed

        # checking robot configuration before action (optional)
        # do your programme
        # an input_key or output_key can be referred to as userdata.name_of_the_key
        # checking robot configuration after action (optional)

        # return one of the outcomes defined above
        return 'outcome1'
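The contract of the template above (initialise all output keys, work through userdata, return one declared outcome) can be exercised without ROS. The sketch below uses minimal stand-ins for smach.State and userdata; the class name and the stubbed detection result are hypothetical, not taken from the SRS code:

```python
class UserData:
    """Minimal stand-in for smach userdata (a plain attribute bag)."""
    pass

class DetectObjectState:
    """Hypothetical state following the generic-state template above,
    without the ROS/smach dependency."""
    outcomes = ['succeeded', 'failed']
    input_keys = ['target_ids']
    output_keys = ['identified_poses']

    def execute(self, userdata):
        # initialise all output keys first, as the template requires
        userdata.identified_poses = []
        # "do your programme": pretend every requested object was found at a
        # fixed pose (stubbed detection result, for illustration only)
        userdata.identified_poses = [(obj_id, (0.5, 0.2, 0.9))
                                     for obj_id in userdata.target_ids]
        # return one of the outcomes declared above
        return 'succeeded' if userdata.identified_poses else 'failed'

ud = UserData()
ud.target_ids = ['Milkbox']
outcome = DetectObjectState().execute(ud)
```

In the real system the state would call the SRS assisted detection service instead of fabricating poses, but the outcome/userdata flow is the same.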
Required message format: To be added
High level state machines
In the SRS implementation, action schemas are realised with ROS SMACH as robot control state machines and integrated with the hardware platform via the SRS generic states defined above.
SRS high level state machines are listed in the following table:
Actions | Required inputs | Outputs | Outcomes
--- | --- | --- | ---
SM_Navigation | Target 2d pose | Operation status (no object, object in SDH, or object on tray) | 'Completed', 'Not completed', 'Failed'
SM_Detection (for detecting only) | List of target IDs; optional: bounding region 3D | List of identified poses and corresponding IDs | 'Completed', 'Not completed', 'Failed'
SM_Environment_update | List of target IDs; optional: bounding region 2D | List of identified poses and corresponding IDs | 'Completed', 'Not completed', 'Failed'
SM_Simple_Grasp (detection for grasping + grasping after object detected) | Target object ID | Last pose of the object before grasp | 'Completed', 'Not completed', 'Failed'
SM_Grasp (sm_simple_grasp + open door) | Target object ID, Door ID | Last pose of the object before grasp | 'Completed', 'Not completed', 'Failed'
SM_After_Grasp (placing object on tray etc.) | Object configuration (on tray or in SDH) | – | 'Completed', 'Not completed', 'Failed'
SM_Placing (placing object at delivery location) | Target object ID (the id of the target workspace) | – | 'Completed', 'Not completed', 'Failed'
SRS high level commands
SRS high level commands are normally issued by users. As part of the user interaction process, command execution needs to form a closed loop, i.e. it starts from the idle state and also ends at the idle state.
High level commands (and their corresponding parameters) required by the SRS fetch and carry scenario are listed in the following table:
action | parameters | example
--- | --- | ---
move | Target | "move ChargingStation0"
search | Target object name + Search_area (optional) | "search Milkbox Table0 Table1"
get | Target object name + Search_area (optional) | "get Milkbox Table0 Table1"
fetch | Target object name + Order_position + Search_area (optional) | "fetch Book0 order Bookshelf0 Table0"
deliver | Target object name + Target deliver position + Search_area (optional) | "deliver Milkbox KitchenCounter0 Table0 Table1"
stop | – | –
pause | – | –
resume | – | –
Note1: Compared to other high level commands, the stop command does not start from idle. Its actual behaviour depends on the point at which the command is issued; e.g., a stop command issued before the object has been grasped won't be the same as one issued after the object has been grasped. SRS decision making will provide an optimised policy accordingly by analysing the circumstances and context in real time.
Note2: The commands above can be reorganised hierarchically for more complicated tasks such as setting a table. They will be expanded in further SRS development.
Note3: The commands "stop", "pause" and "resume" do not take any parameters. Parameters are separated by a blank character. The optional parameter "Search_area" is always at the end and may consist of several places or none at all.
Note4: "Book0" and "Milkbox" are object IDs. "Table0", "Table1", "KitchenCounter0" and "Bookshelf0" are workspaces. "order" and "ChargingStation0" are predefined poses.
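The grammar described in Notes 3 and 4 (blank-separated parameters, a fixed number of leading parameters per action, and an optional trailing Search_area of zero or more places) can be sketched as a small parser. This is an illustrative helper, not the SRS parser itself:

```python
# Number of fixed (non-Search_area) parameters for each high level command,
# as described in the table and notes above.
FIXED_PARAMS = {"move": 1, "search": 1, "get": 1,
                "fetch": 2, "deliver": 2,
                "stop": 0, "pause": 0, "resume": 0}

def parse_command(line):
    """Split a high level command string into its action, fixed parameters
    and optional trailing Search_area (hypothetical sketch)."""
    action, *params = line.split()      # parameters are blank-separated
    n = FIXED_PARAMS[action]
    return {"action": action,
            "fixed": params[:n],        # e.g. target object, delivery position
            "search_area": params[n:]}  # zero or more workspaces at the end
```

For example, "deliver Milkbox KitchenCounter0 Table0 Table1" yields the target object, the delivery position, and a two-workspace search area.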
Action Server
At the moment, the action server takes the high level actions Move, Search, Get, Stop, Pause and Resume. This list will be further expanded as the project develops.
The SRS action server follows the ROS actionlib convention.
Based on ROS actionlib, the SRS action server publishes the following topics:
/srs_decision_making_actions/cancel
/srs_decision_making_actions/feedback
/srs_decision_making_actions/goal
/srs_decision_making_actions/result
/srs_decision_making_actions/status
ROS API
srs_decision_making_actions/goal
string action
string[] parameters
uint32 priority
string client_id
string client_type
high level action name, parameters, priority, client id and client type.
srs_decision_making_actions/result
uint32 return_value
# 3 succeeded, 4 failed, 2 preempted
high level action name, parameter and priority
srs_decision_making_actions/feedback
string current_state
bool solution_required
uint32 exceptional_case_id
# 1 intervention for base pose needed
# 2 intervention for key region needed
# 3 intervention for grasp categorisation needed
# 4 intervention for action sequences needed
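To make the goal and result fields concrete, here is a plain-Python sketch of the structures listed above. The real interface uses ROS actionlib message types, not dataclasses; the field names follow the listing, while the dataclass itself and the example goal are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DMGoal:
    """Sketch of srs_decision_making_actions/goal (field names from the listing)."""
    action: str                                  # high level action name, e.g. 'get'
    parameters: list = field(default_factory=list)
    priority: int = 0
    client_id: str = ""
    client_type: str = ""

# Result return_value codes, as documented above.
RESULT_CODES = {3: "succeeded", 4: "failed", 2: "preempted"}

# Feedback exceptional_case_id codes, as documented above.
EXCEPTIONAL_CASES = {
    1: "intervention for base pose needed",
    2: "intervention for key region needed",
    3: "intervention for grasp categorisation needed",
    4: "intervention for action sequences needed",
}

goal = DMGoal(action="get", parameters=["Milkbox", "Table0"], priority=1)
```

A real client would wrap such a goal in an actionlib SimpleActionClient call against /srs_decision_making_actions/goal.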
3.3.2 MIXED REALITY SERVER (SRS_MIXED_REALITY_SERVER)
System architecture
The Mixed Reality Server consists of five nodes:
MRS
This node streams the map information and augmented reality – household objects such as furniture and graspable objects – as a standard MJPEG/HTTP video stream. It also provides the functionality to stream selected ROS image topics.
The MRS node subscribes to the /control_mrs topic, which feeds object information for the augmented map drawing. The topic is published by the ControlMRS node.
ControlMRS
This node generates the combined information for the augmented reality using service calls to the SRS Household Database (information about object size, grasping possibility and icon) and the SRS Knowledge Database (information about name, class and pose).
The published topic (/control_mrs) structure is:
string command # Can be ADD or REMOVE (add or remove object from augmented map)
uint32 virtual_obj_number # the unique number of the object on the virtual map
uint32 virtual_obj_count # total number of the objects drawn on current frame of the virtual map
string topic # name of the image topic which will be augmented
int32 id # ID of the object in HH DB
string label # Text to label the drawn object
int32 type # Possible types:
# 1 - CONTOUR_RECTANGLE; (only for augmentation level 2)
# 2 - CONTOUR_ELIPSE; (only for augmentation level 2)
# 3 - USE IMAGE (only for augmentation level 3)
float32 x # Center of the drawn object on x axis
float32 y # Center of the drawn object on y axis
float32 width # Width of the object bounding box
float32 height # Height of the object bounding box
float32 angle # Rotation of the object in Degrees
string clr
# clr - Color of the object representation shape in HTML color – for example #FF0000 red
# (valid only for augmentation level 2)
sensor_msgs/Image image # topview image icon for the object (only for augmentation level 3)
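A quick way to see how these fields fit together is a plain-Python constructor that validates the documented command and type values. This is an illustrative sketch; the real /control_mrs payload is a ROS message, and the helper name and defaults are hypothetical:

```python
# Shape type codes documented for the /control_mrs message above.
CONTOUR_RECTANGLE, CONTOUR_ELIPSE, USE_IMAGE = 1, 2, 3

def make_control_mrs(command, obj_id, label, shape_type,
                     x, y, width, height, angle=0.0, clr="#FF0000"):
    """Build a dict mirroring the /control_mrs fields (hypothetical helper)."""
    if command not in ("ADD", "REMOVE"):
        raise ValueError("command must be ADD or REMOVE")
    if shape_type not in (CONTOUR_RECTANGLE, CONTOUR_ELIPSE, USE_IMAGE):
        raise ValueError("type must be 1, 2 or 3")
    return {"command": command, "id": obj_id, "label": label,
            "type": shape_type, "x": x, "y": y,
            "width": width, "height": height,
            "angle": angle, "clr": clr}
```

For instance, adding a table outline at augmentation level 2 would use command "ADD" with type CONTOUR_RECTANGLE and an HTML colour string.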
RealityPublisher
This node provides information to the UI about the objects located on the augmented map.
The published topic /map_active_areas consists of a ros header and arrays of data which contain all the objects presented on the virtual map.
Header header
uint32[] object_id # IDs of the object in Household Database
string[] object_name # Object names – e.g. Table, Fridge etc.
string[] object_base_shape # Object base shapes (currently rectangle or ellipse)
uint32[] object_type # Object types such as furniture, unknown.
uint32[] x1 # Top left corners of object bounding box (x axis) in pixel coordinates
uint32[] y1 # Top left corners of object bounding box (y axis) in pixel coordinates
uint32[] x2 # Bottom right corners of object bounding box (x axis) in pixels
uint32[] y2 # Bottom right corners of object bounding box (y axis) in pixels
uint32[] angle # Bounding box rotation angles in degrees
bool[] graspable # Grasping possibility
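Because /map_active_areas carries parallel arrays (the i-th entry of every array describes the same object), a UI client would typically zip them back into per-object records. The helper and the sample message below are hypothetical illustrations of that unpacking:

```python
def unpack_active_areas(msg):
    """Zip the parallel arrays of a /map_active_areas-style dict into a list
    of per-object records (illustrative helper, not part of the SRS code)."""
    keys = ("object_id", "object_name", "object_base_shape", "object_type",
            "x1", "y1", "x2", "y2", "angle", "graspable")
    return [dict(zip(keys, values))
            for values in zip(*(msg[k] for k in keys))]

# Hypothetical two-object message (header omitted for brevity).
msg = {"object_id": [1, 2], "object_name": ["Table", "Fridge"],
       "object_base_shape": ["rectangle", "rectangle"],
       "object_type": [0, 0], "x1": [10, 50], "y1": [20, 60],
       "x2": [30, 70], "y2": [40, 80], "angle": [0, 90],
       "graspable": [False, False]}
objs = unpack_active_areas(msg)
```

Each record then holds one object's id, name, shape, pixel bounding box, rotation and graspability.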
MapTF
This node converts coordinates and sizes from ROS metric to pixel coordinates on the virtual map and vice versa. It is provided as a ROS service srs_mixed_reality_server/MapTransform:
uint32 transform_type
# Transform type can be:
# 1 - transform position from metric to map pixel coordinates
# 2 - transform size from metric to map pixel size
# 3 - transform position from pixel to metric coordinates
# 4 - transform size from pixels on map to metric
geometry_msgs/Point world_coordinates # input point: metric for types 1 and 2, pixels for types 3 and 4
---
geometry_msgs/Point map_coordinates # output point: pixels for types 1 and 2, metric for types 3 and 4
It also publishes the position of the robot in pixel map coordinates on topic /maptf.
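The four transform types boil down to scaling by the map resolution and shifting by the map origin. The sketch below assumes a fixed resolution and origin purely for illustration; the real MapTF node obtains these values from the map server:

```python
# Assumed map parameters (illustrative; the real values come from the map server).
RESOLUTION = 0.25          # metres per pixel (assumed)
ORIGIN = (-10.0, -10.0)    # metric coordinates of pixel (0, 0) (assumed)

def map_transform(transform_type, x, y):
    """Mirror the four MapTransform conversion types described above."""
    if transform_type == 1:    # metric position -> pixel position
        return ((x - ORIGIN[0]) / RESOLUTION, (y - ORIGIN[1]) / RESOLUTION)
    if transform_type == 2:    # metric size -> pixel size
        return (x / RESOLUTION, y / RESOLUTION)
    if transform_type == 3:    # pixel position -> metric position
        return (x * RESOLUTION + ORIGIN[0], y * RESOLUTION + ORIGIN[1])
    if transform_type == 4:    # pixel size -> metric size
        return (x * RESOLUTION, y * RESOLUTION)
    raise ValueError("transform_type must be 1-4")
```

Note that positions need the origin offset while sizes only need the scale, which is why the service distinguishes position and size transforms.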
Assisted Detection Node
The Care-O-bot assisted detection feature enables the user to initiate the detection process on a selected region and to choose the right object in the UI_PRI when multiple detections are available. This is implemented via the Assisted Detector node in the Mixed Reality Server, the BB Estimator, and the srs_assisted_detection stack. Assisted Detector node This node operates as a ROS service /srs_assisted_detector called from the UI_PRI via the ROSBridge. The service has the following input parameters:
uint32 operation # 1 UIDetection, 2 BBMove, 3 Test
std_msgs/String name # Name of the detected object
int16[2] p1 # x1,y1 coordinates of the selected region in the UI /operation mode 2/
int16[2] p2 # x2,y2 coordinates of the selected region in the UI /operation mode 2/
The service response is:
string[] object_name # Array of the object labels detected
uint32[] x1 # Array of the object x1 coordinates detected
uint32[] y1 # Array of the object y1 coordinates detected
uint32[] x2 # Array of the object x2 coordinates detected
uint32[] y2 # Array of the object y2 coordinates detected
std_msgs/String message # response from the BBMove if any
Operation modes
1. Operation mode 1 - UIDetect. The user starts the detection process in the UI_PRI by clicking on the Detect button, which calls the /srs_assisted_detector service. It triggers a call to the UIDetect service and converts the received array of detected objects from 3D bounding boxes to camera pixel coordinates via the BB Estimator Alt service. The detected objects with their corresponding pixel coordinates are then passed back to the UI_PRI, and a 2D rectangle is displayed around each object in the UI video. The user can select a particular object; its id is then passed back via the UIAnswer service.
2. Operation mode 2 - if the detection is not successful, the user can select a region of interest and the robot will be repositioned to allow better detection. The /srs_assisted_detector service is called with operation value 2 and the p1, p2 coordinates of the selected points. The assisted detector calls the BB Estimator to transform the coordinates from a 2D to a 3D bounding box and then calls the BBMove service. If possible, the BBMove service repositions the robot to enable better detection, and its response is passed back to the UI.
3. Operation mode 3 - this mode is for testing purposes only. It allows testing whether the Assisted Detector node is running, along with the communication path and the video overlay augmentation in the UI_PRI. In this mode the /srs_assisted_detector service returns the coordinates and name of a test object (Milk box).
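The dispatch logic of the three operation modes can be mocked in a few lines; the response field names follow the service definition above, while the detector and mover callbacks are stand-ins for the real UIDetect and BBMove calls:

```python
# Minimal mock of the /srs_assisted_detector dispatch (modes 1-3).
# detect_cb / move_cb are placeholder callbacks, not the real services.

def assisted_detector(operation, name='', p1=(0, 0), p2=(0, 0),
                      detect_cb=None, move_cb=None):
    resp = {'object_name': [], 'x1': [], 'y1': [], 'x2': [], 'y2': [],
            'message': ''}
    if operation == 1:       # UIDetect: run detection, return 2D boxes
        for label, (x1, y1, x2, y2) in detect_cb():
            resp['object_name'].append(label)
            resp['x1'].append(x1); resp['y1'].append(y1)
            resp['x2'].append(x2); resp['y2'].append(y2)
    elif operation == 2:     # BBMove: reposition toward region p1-p2
        resp['message'] = move_cb(p1, p2)
    elif operation == 3:     # Test: fixed test object, as in mode 3 above
        resp['object_name'] = ['Milk box']
        resp['x1'], resp['y1'] = [10], [10]
        resp['x2'], resp['y2'] = [60], [90]
    return resp
```

The test-object coordinates in mode 3 are invented for the sketch; only the object name is taken from the text above.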
Testing
You can test the output using the UI_PRI on an iPad or a standard WebSockets-compliant browser, such as Google Chrome.
To view a snapshot - please open:
http://127.0.0.1:8080/snapshot?topic=/map
To view the live stream - please open:
http://127.0.0.1:8080/stream?topic=/map
You can monitor any ROS image topic for example:
http://127.0.0.1:8080/stream?topic=/stereo/right/image_raw
Note: replace 127.0.0.1 with the IP of your ROS server where you run the MRS.
3.3.3 KNOWLEDGE BASE (SRS_KNOWLEDGE)
ROS API
ROS services for a semantic database (SDB), implemented in Java (rosjava_jni) with Jena/Pellet.
knowledge_srs_node
Services
task_request (srs_knowledge/TaskRequest)
Called when there is a new task. A new session id will be generated and used for the subsequent action generation.
plan_next_action (srs_knowledge/PlanNextAction)
To get the next action that the robot needs to execute
query_sparql (srs_knowledge/QuerySparQL)
Sends a SPARQL query directly to the database and returns the result in JSON format (mainly for testing purposes).
get_objects_on_map (srs_knowledge/GetObjectsOnMap)
Queries the semantic database (SDB) for all movable or graspable objects, such as MilkBox0. All object names are unique and case sensitive as defined in the OWL file.
get_workspace_on_map (srs_knowledge/GetWorkspaceOnMap)
Queries the SDB for all workspace or furniture objects, such as table or fridge. All object names are unique and case sensitive as defined in the OWL file.
get_rooms_on_map (srs_knowledge/GetRoomsOnMap)
Queries the SDB for all rooms. All names are unique and case sensitive as defined in the OWL file.
get_predefined_poses (srs_knowledge/GetPredefinedPoses)
Queries the SDB for all predefined poses, such as home, charging_position, etc.
get_workspace_for_object (srs_knowledge/GetWorkspaceForObject)
Queries the SDB for all possible workspaces or furniture pieces that could store a particular object, e.g. book_shelf and desk could be workspaces for a book.
insert_instance (srs_knowledge/InsertInstance)
delete_instance (srs_knowledge/DeleteInstance)
update_pos_info (srs_knowledge/UpdatePosInfo)
Types of objects
Two types of objects are defined here: graspable or movable objects, and workspace or furniture objects.
Graspable or movable objects
Possible actions:
move - move to the furniture which holds the object
grasp - grasp the object
Furniture objects
Furniture objects are considered not movable (though they can also be updated in the database, with different actions accordingly).
There is no direct action for furniture pieces, as currently fetch and carry tasks only focus on
graspable objects, such as book, milkbox, etc.
Workspaces are places where objects are located. Workspaces are distinguished from graspable objects so that the semantic relationships between objects can be used, e.g. a bookshelf is a workspace for books, and a table for a milkbox.
To execute a task such as "get a milkbox", the robot searches for the object in possible locations, such as a table, the fridge, or even the oven top.
The user can also specify the possible workspaces, to reduce the search space and improve efficiency, if the user knows the environment better.
Possible actions
environment_update - update the pose information of the targeted object
workspace verification - verify if the workspace exists at the given pose.
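Conceptually, get_workspace_for_object behaves like a lookup from object type to candidate workspaces. The toy stand-in below mirrors the book and milkbox examples above; in the real service the answer comes from the OWL ontology via Jena, and the workspace names here are only illustrative:

```python
# Static stand-in for the ontology lookup behind get_workspace_for_object.
# Entries mirror the book -> book_shelf/desk and milkbox -> table/fridge/
# oven-top examples in the text; names are illustrative, not from the OWL file.

WORKSPACES = {
    'Book': ['book_shelf', 'desk'],
    'Milkbox': ['table', 'fridge', 'oven_top'],
}

def get_workspace_for_object(object_type):
    """Return candidate workspaces for an object type (empty if unknown)."""
    return WORKSPACES.get(object_type, [])
```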
Services /task_request and /plan_next_action
These two services are used for task planning, i.e. to plan the next action the robot should execute.
A Python script, srs_knowledge/src/demoplanaction.py, is provided for testing.
Here is a list of some unit tests, simulating different conditions of the robot task execution.
Task: Move
For demonstration purposes, part of the test script is copied here.
def test_move():
    res = requestNewTaskJSONMove()
    sessionId = res.sessionId
    acts = list()
    act = planNextActionServiceJSON(sessionId, 0, '')
    acts.append(act)
    act = planNextActionServiceJSON(sessionId, 0, '')
    acts.append(act)
    return acts

def planNextActionServiceJSON(sessionId, result, jsonFeedback):
    print 'Plan next Action service'
    rospy.wait_for_service('plan_next_action')
    try:
        next_action = rospy.ServiceProxy('plan_next_action', PlanNextAction)
        req = PlanNextActionRequest()
        req.sessionId = sessionId
        req.resultLastAction = result
        req.jsonFeedback = jsonFeedback
        resp1 = next_action(req)
        return resp1.nextAction
    except rospy.ServiceException, e:
        print "Service call failed: %s" % e
To run it:
$ rosrun srs_knowledge demoplanaction.py
Request new task
### Task details are in JSON
send task request
{"time_schedule":1263798000000,"task":"move","destination":{"predefined_pose":"home"}}
### First action to accomplish the task is to “move”
[status: 0
generic:
jsonActionInfo: {"action":"move","destination":{"pose2d":{"theta":0.0,"y":0.0,"x":0.0 } } }
actionType: generic, status: 1
### Next action to accomplish the task is “finish_success” (given the above "move" action is completed successfully)
generic:
jsonActionInfo: {"action":"finish_success"}
actionType: generic]
Communication is mainly based on the JSON format in order to transfer different data types. The JSON protocol is specified in the following section.
If test_move() is changed to:
def test_move():
    res = requestNewTaskJSONMove()
    sessionId = res.sessionId
    acts = list()
    act = planNextActionServiceJSON(sessionId, 0, '')
    acts.append(act)
    act = planNextActionServiceJSON(sessionId, 1, '')  ### last action is not completed successfully
    acts.append(act)
    return acts
To run it:
$ python demoplanaction.py
Request new task
send task request
{"time_schedule":1263798000000,"task":"move","destination":{"predefined_pose":"home"}}
### First action to accomplish the task is to “move”
[status: 0
generic:
jsonActionInfo: {"action":"move","destination":{"pose2d":{"theta":0.0,"y":0.0,"x":0.0} } }
actionType: generic, status: -1
### Next action to accomplish the task is “finish_fail” (because the robot could not move to the
target)
generic:
jsonActionInfo: {"action":"finish_fail"}
actionType: generic]
The test can be performed using the script by changing the parameters accordingly. Here, only successful actions are presented for demonstration purposes to show the action sequence of a Get task.
Task: Get/Fetch
$ python demoplanaction.py
Test FETCH task
Request new task
send task request
{"time_schedule":1263798000000,"task":"get","deliver_destination":{"predefined_pose":"order"},"object":{"object_type":"Milkbox"},"grasping_type":"Simple"}
[status: 0
### Move to target where the target object could be nearby
generic:
jsonActionInfo:
{"action":"move","destination":{"pose2d":{"theta":0.0,"y":0.0591675355492316,"x":-2.2000000357627867} } }
actionType: generic, status: 0
### Detect the object
generic:
jsonActionInfo:
{"action":"detect","object":{"object_type":"Milkbox","workspace":"Dishwasher0","object_id":9}}
actionType: generic, status: 0
### Move to a closer position after detecting the object
generic:
jsonActionInfo: {"action":"move","destination":{"pose2d":{"theta":0.06921684521207505,"y":-0.044910181658103565,"x":-2.208831782585601} } }
actionType: generic, status: 0
### Grasp the object
generic:
jsonActionInfo:
{"action":"grasp","object":{"object_type":"Milkbox","workspace":"Dishwasher0","object_id":9}}
actionType: generic]
Tasks Fetch and Search are similar to Get (refer to srs_decision_making) and hence are not listed here. The test clients allow using different combinations of parameters as input to verify the correctness of the results. So far, all tests have produced correct task plans with this unit-test approach.
3.3.4 GRASPING (SRS_GRASPING)
ROS API
This API is for grasp tasks with the SDH (Schunk Dextrous Hand). It provides tools to generate/simulate different grasping solutions.
Resources
There are two services to obtain the grasping solutions. The get_pregrasps service is used exclusively by srs_ui_but.
Services
get_db_grasps ( srs_grasping/GetDBGrasps)
Reads the grasping configurations from the database. If the information is not stored, it tries to generate it.
get_feasible_grasps ( srs_grasping/GetFeasibleGrasps)
Returns the feasible grasping configurations (in /base_link coordinates system) for a given object ID and object pose.
get_pregrasps ( srs_grasping/GetPreGrasp)
Returns pregrasp configurations for a given position.
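As an illustration of what a feasibility check like get_feasible_grasps involves, the sketch below filters grasp candidates by a simple reach test; the 0.8 m reach, the data layout and the function name are assumptions for the sketch, not the real service logic:

```python
import math

# Toy feasibility filter: keep grasp candidates whose approach point lies
# within an assumed arm reach of the robot base frame. The real service
# additionally checks kinematics and collisions.

def feasible_grasps(candidates, object_pos, reach=0.8):
    ok = []
    for grasp in candidates:
        # Grasp point = object position plus the candidate's offset (assumed layout).
        gx = object_pos[0] + grasp['offset'][0]
        gy = object_pos[1] + grasp['offset'][1]
        gz = object_pos[2] + grasp['offset'][2]
        if math.sqrt(gx * gx + gy * gy + gz * gz) <= reach:
            ok.append(grasp)
    return ok
```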
Files
scripts/test_simulation.py - simulates the generated grasp configurations in the OpenRAVE simulator.
scripts/test_simplegrasp.py - a grasp task example.
script/test_generator.py - obtains grasp configurations for a given object.
src/databaseutils.py, src/operaveutils.py, src/graspingutils.py - these files contain all the methods used in the grasping tools.
src/grasping_functions.py - contains an instance of each of the above.
src/get_db_grasps.py, src/get_feasible_grasps.py, src/get_pregrasps.py - the service implementations.
launch/grasping_services.py - launches all the services.
robos/care-o-bot3.zae - the mesh of the hand, needed to calculate grasps and for other tasks.
3.3.5 SIMPLE SCENARIO (SRS_SCENARIOS)
srs_scenarios tests the SRS interface and interactions with command consoles:
1. The robot stands at the zero position.
2. The robot searches the room for a milkbox based on common knowledge from the semantic DB.
3. The robot perceives object information based on environment perception and object detection.
4. The robot adjusts its position autonomously based on decision making and symbolic grounding.
5. The robot grasps the object and transfers it to the tray.
6. The robot brings the milkbox to the sofa in the IPA kitchen.
3.3.6 HUMAN SENSING (SRS_LEG_DETECTOR)
srs_human_sensing
SRS human sensing uses the laser range finders of the Care-O-bot to locate a person around the robot.
The results of the detection, i.e. the coordinates of the tracked people, are published to the /tracked_people topic.
3.3.7 PRIVATE USER INTERFACE (SRS_UI_PRI)
The installation and configuration of this component has been tested and works well. UI_PRI has been tested on several devices.
3.3.8 LOCAL USER INTERFACE (SRS_UI_LOC)
UI_LOC has been tested on several devices:
Device name | Android version | Rooted | Supported
Samsung Galaxy SII | 2.3.5 | yes | full support
easypix Easypad 1370 | 2.3.1 | no | full support
HTC Sense 3.0 | 2.3.5 | no | full support
Huawei G300 | 2.3.6 | no | no support at all
HTC Sense 3.6 | 4.0 | yes | Skype and editing etc/hosts do not work
3.3.9 ASSISTED ARM NAVIGATION (SRS_ASSISTED_ARM_NAVIGATION)
Overview
The assisted arm navigation package offers functionality similar to the Warehouse Viewer: interactive (collision-free) arm motion planning. It was designed for the Care-O-Bot within the SRS project, but can easily be modified for any robot running the arm_navigation stack. It enables a user to start arm planning through an RVIZ plugin with a simple interface. The goal position of the end effector can be set with a 6 DOF interactive marker or a SpaceNavigator device. A collision map produced by the Environment Model is used for collision-free trajectory planning.
The current version has been tested only with ROS Electric.
Assisted arm navigation is divided into the following packages:
srs_assisted_arm_navigation: Main functionality.
srs_assisted_arm_navigation_msgs: Definition of services and action interface.
srs_assisted_arm_navigation_ui: RVIZ plugin.
For an example of how this functionality can be integrated into a more complex system, see srs_arm_navigation_tests, where the integration into the SRS structure is implemented in the form of SMACH generic states.
Screenshots
This is how it looks in RVIZ when the user starts arm planning. There is a 6 DOF interactive marker, a marker representing the arm (green) and a marker for the object to be grasped.
A collision with the environment or with the object is clearly indicated, as is the situation when the desired goal position is out of reach.
When the trajectory is planned, the user can play its animation several times and decide whether it is reasonable and safe.
The user interface consists of a few controls and contains a description of the task for the user (if the action interface is used to give tasks to the user).
ROS API
The assisted arm navigation node communicates with the user interface using a set of services. There are also services for adding collision objects etc. The most important part is the action interface, which can be used to ask the user to perform a task.
Actionlib interface
Action Subscribed Topics
/but_arm_manip/manual_arm_manip_action/goal (srs_assisted_arm_navigation_msgs/ManualArmManipActionGoal)
A task for user.
/but_arm_manip/manual_arm_manip_action/cancel (actionlib_msgs/GoalID)
A request to cancel given task.
Action Published Topics
/but_arm_manip/manual_arm_manip_action/feedback (srs_assisted_arm_navigation_msgs/ManualArmManipActionFeedback)
Feedback contains the current state of the task.
move_base/status (actionlib_msgs/GoalStatusArray)
Provides status information on the goals that are sent to the assisted_arm_navigation action.
/but_arm_manip/manual_arm_manip_action/result (srs_assisted_arm_navigation_msgs/ManualArmManipActionResult)
Result contains information about task from user (succeeded, failed etc.).
Topics, services, parameters
Subscribed Topics
/spacenav/joy (sensor_msgs/Joy)
tbd
/spacenav/offset (geometry_msgs/Vector3)
tbd
/spacenav/rot_offset (geometry_msgs/Vector3)
tbd
Published Topics
/but_arm_manip/state (srs_assisted_arm_navigation_msgs/AssistedArmNavigationState)
A state of arm navigation.
Services
/but_arm_manip/arm_nav_new (srs_assisted_arm_navigation_msgs/ArmNavNew)
Called from user interface, when there is a request for new trajectory planning.
/but_arm_manip/arm_nav_plan (srs_assisted_arm_navigation_msgs/ArmNavPlan)
Plan from current position to the goal position.
/but_arm_manip/arm_nav_play (srs_assisted_arm_navigation_msgs/ArmNavPlay)
Visualize trajectory.
/but_arm_manip/arm_nav_execute (srs_assisted_arm_navigation_msgs/ArmNavExecute)
Execute trajectory.
/but_arm_manip/arm_nav_reset (srs_assisted_arm_navigation_msgs/ArmNavReset)
Cancel current planning.
/but_arm_manip/arm_nav_success (srs_assisted_arm_navigation_msgs/ArmNavSuccess)
Task was successful (user pressed "Success" button).
/but_arm_manip/arm_nav_failed (srs_assisted_arm_navigation_msgs/ArmNavFailed)
User was not able to finish given task.
/but_arm_manip/arm_nav_refresh (srs_assisted_arm_navigation_msgs/ArmNavRefresh)
Refresh planning scene.
/but_arm_manip/arm_nav_coll_obj (srs_assisted_arm_navigation_msgs/ArmNavCollObj)
Add bounding box of object to the planning scene.
/but_arm_manip/arm_rem_coll_obj (srs_assisted_arm_navigation_msgs/ArmNavRemoveCollObjects)
Remove all collision objects.
/but_arm_manip/arm_nav_set_attached (srs_assisted_arm_navigation_msgs/ArmNavSetAttached)
Set collision object to be attached.
/but_arm_manip/arm_nav_move_palm_link (srs_assisted_arm_navigation_msgs/ArmNavMovePalmLink)
Move virtual end effector to some absolute position.
/but_arm_manip/arm_nav_move_palm_link_rel (srs_assisted_arm_navigation_msgs/ArmNavMovePalmLinkRel)
Move virtual end effector relatively.
/but_arm_manip/arm_nav_switch_aco (srs_assisted_arm_navigation_msgs/ArmNavSwitchACO)
Enable/disable artificial collision object attached to gripper.
/but_arm_manip/arm_nav_repeat (srs_assisted_arm_navigation_msgs/ArmNavRepeat)
User pressed "Repeat" button (if it was allowed by action).
/but_arm_manip/arm_nav_step (srs_assisted_arm_navigation_msgs/ArmNavStep)
Undo / redo.
/but_arm_manip/arm_nav_stop (srs_assisted_arm_navigation_msgs/ArmNavStop)
Stop execution of trajectory.
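As a sketch of how a client typically drives these services, the nominal order of calls in one planning round can be encoded and checked; the service names mirror the list above, while the helper itself is invented for illustration:

```python
# Nominal call order of the /but_arm_manip/arm_nav_* services during one
# assisted planning round; the validator is an illustrative helper only.

PLANNING_ROUND = [
    'arm_nav_new',      # open a new planning session
    'arm_nav_plan',     # plan from the current to the goal position
    'arm_nav_play',     # replay the trajectory for visual inspection
    'arm_nav_execute',  # execute the approved trajectory
    'arm_nav_success',  # report the task as finished successfully
]

def valid_round(calls):
    """True if the core calls follow the nominal order (other services may be interleaved)."""
    core = [c for c in calls if c in PLANNING_ROUND]
    return core == PLANNING_ROUND[:len(core)]
```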
Parameters
~arm_planning/arm_planning/inflate_bb (double, default: "1.0")
Bounding box of each object inserted into planning scene will be inflated by this factor.
~arm_planning/world_frame (string, default: "map")
Planning will be performed in this coordinate system.
~arm_planning/end_eff_link (string, default: "sdh_palm_link")
End effector link.
~arm_planning/joint_controls (bool, default: "false")
Enable/disable interactive markers for all joints.
~arm_planning/make_collision_objects_selectable (bool, default: "false")
If the object inserted into the planning scene should be selectable or not.
~arm_planning/aco/link (string, default: "arm_7_link")
Artificial collision object (when enabled) will be attached to this link.
~arm_planning/aco/default_state (bool, default: "false")
Default state of the artificial collision object.
~spacenav/enable_spacenav (bool, default: "true")
Enables usage of Space Navigator control.
~spacenav/use_rviz_cam (bool, default: "true")
Enables usage of the RVIZ camera position so that the end effector is controlled from the user's perspective. The camera position must be published as a TF transformation; a plugin for publishing it is available in the srs_env_model_ui package.
~spacenav/rviz_cam_link (string, default: "rviz_cam")
TF frame for RVIZ camera.
~spacenav/max_val (double, default: "350.0")
Maximal value for data from Space Navigator. Higher values will be limited.
~spacenav/min_val_th (double, default: "0.05")
Threshold for normalized values (current / max_val). Must be in <0.0, 0.5> range.
~spacenav/step (double, default: "0.1")
Step for changes of end effector interactive marker position.
~arm_links (list of strings, default: "cob3-3 arm links")
List of arm links.
~set_planning_scene_diff_name (string, default: "environment_server/set_planning_scene_diff")
Service for communication with Environment Server.
~left_ik_name (string, default: "cob3_arm_kinematics/get_constraint_aware_ik")
Constraint aware IK service.
~planner_1_service_name (string, default: "ompl_planning/plan_kinematic_path")
Planner service name.
~trajectory_filter_1_service_name (string, default: "trajectory_filter_server/filter_trajectory_with_constraints")
Trajectory filter service name.
~vis_topic_name (string, default: "planning_scene_visualizer_markers")
Topic for publishing visualization markers.
~left_ik_link (string, default: "arm_7_link")
End effector link for which we perform IK.
~left_arm_group (string, default: "arm")
Arm group.
~execute_left_trajectory (string, default: "arm_controller/follow_joint_trajectory")
Arm controller action.
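The fallback behaviour of these parameters can be illustrated with a standalone sketch; the dict below stands in for the ROS parameter server (a real node would call rospy.get_param), and only a few of the documented defaults are shown:

```python
# A stand-in for the parameter server holding a few of the documented
# defaults above; the resolver mimics how a private-parameter lookup
# falls back to the default when no override is set.

DEFAULTS = {
    'arm_planning/world_frame': 'map',
    'arm_planning/end_eff_link': 'sdh_palm_link',
    'spacenav/enable_spacenav': True,
    'spacenav/max_val': 350.0,
}

def get_param(name, overrides=None):
    """Return the override if one is set, otherwise the documented default."""
    overrides = overrides or {}
    return overrides.get(name, DEFAULTS[name])
```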
3.3.10 INTERACTION PRIMITIVES (SRS_INTERACTION_PRIMITIVES)
Overview
This package introduces GUI primitives for HRI, designed to visualize and illustrate objects detected by the robot. The primitives also allow the user to interact with particular objects and to give commands to the robot through them. The primitives consist of interactive markers and are visualized in RViz.
All primitives have a context menu which allows some interaction and configuration of the visualization (showing the description of the primitive, adding/removing manipulators for rotation and translation, etc.).
The predefined primitive types are specified in the srs_env_model/PrimitiveType message.
Interaction Primitives
Billboard
Billboard represents a real-world object which is hard to describe with a mesh. The billboard is view facing and its type is specified in the srs_interaction_primitives/BillboardType message.
The billboard can also illustrate movement of the represented object, e.g. a walking person.
Bounding Box
Bounding Box illustrates the dimensions of an object. The bounding box is able to visualize its dimensions as text labels, and can be manually translated or rotated.
Object
Object represents a detected or real-world object which has its mesh in an object database. Possible pre-grasp positions can be shown around the object; their visualization helps the operator move the gripper to a correct position for grasping. The object can show its bounding box (if specified) and can be manually rotated, translated and scaled in the scene.
Unknown Object
Unknown Object represents obstacles, unreachable objects or dangerous places in the scene, so that the operator can avoid collisions or crashes of the robot.
The unknown object can be manually rotated, translated and scaled in the scene.
Plane
Plane shows only a simple plane which can be tagged as a table desk, wall, etc.
Plane Polygon
Plane Polygon is a variation of the Plane, defined as a planar polygon. It is shown as a transparent plane with the polygon inside it.
Others
Clickable positions
Clickable positions can be added to the scene using the /clickable_positions service or the ClickablePositions action.
When a position is clicked, its coordinates are published on the specified topic.
Robot's pose prediction
The /robot_pose_prediction service is provided for visualizing the robot's predicted movement positions. The positions are visualized using markers.
The service takes two arguments:
geometry_msgs/Pose[] positions - predicted positions
float64 ttl - markers lifetime
ROS API
Nodes
interaction_primitives_service_server
This node provides the interaction_primitives services and publishes the following update topics:
/interaction_primitives/primitive_name/update/pose_changed
/interaction_primitives/primitive_name/update/scale_changed
/interaction_primitives/primitive_name/update/menu_clicked
/interaction_primitives/primitive_name/update/movement_changed
/interaction_primitives/primitive_name/update/tag_changed
Services
All services start with the prefix /interaction_primitives.
/add_billboard
Input: string frame_id, string name, string description, uint8 type, float64 velocity, geometry_msgs/Quaternion direction, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale
Adds a Billboard to the scene.
/add_bounding_box
Input: string frame_id, string name, string object_name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale, std_msgs/ColorRGBA color
Adds a Bounding Box to the scene.
/add_plane
Input: string frame_id, string name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale, std_msgs/ColorRGBA color
Adds a Plane to the scene.
/add_plane_polygon
Input: string frame_id, string name, string description, geometry_msgs/Polygon polygon, geometry_msgs/Vector3 normal, std_msgs/ColorRGBA color
Adds a Plane Polygon to the scene.
/add_object
Input: string frame_id, string name, string description, geometry_msgs/Pose pose, geometry_msgs/Point bounding_box_lwh, std_msgs/ColorRGBA color, string resource, bool use_material, bool allow_pregrasp
Adds an Object to the scene.
/add_unknown_object
Input: string frame_id, string name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale
Adds an Unknown Object to the scene.
/remove_primitive
Input: string name
Removes a primitive from the scene.
/change_description
Input: string name, string description
Changes the primitive's description.
/change_pose
Input: string name, geometry_msgs/Pose pose
Changes the primitive's pose.
/change_scale
Input: string name, geometry_msgs/Vector3 scale
Changes the primitive's scale.
/change_color
Input: string name, std_msgs/ColorRGBA color
Changes the primitive's color.
/change_direction
Input: string name, geometry_msgs/Quaternion direction
Changes the billboard's movement direction.
/change_velocity
Input: string name, float velocity
Changes the billboard's movement velocity.
/set_pregrasp_position
Input: string name, uint8 pos_id, geometry_msgs/Vector3 position
Sets a pre-grasp position.
/remove_pregrasp_position
Input: string name, uint8 pos_id
Removes a pre-grasp position.
/get_update_topic
Input: string name, uint8 type
Gets the update topic for the specified action (type).
/set_allow_object_interaction
Input: string name, bool allow
Allows or denies interaction with the specified Object.
/get_billboard
Input: string name
Output: string frame_id, string name, string description, uint8 type, float64 velocity, geometry_msgs/Quaternion direction, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale
Gets a Billboard.
/get_bounding_box
Input: string name
Output: string frame_id, string name, string object_name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale, std_msgs/ColorRGBA color
Gets a Bounding Box.
/get_plane
Input: string name
Output: string frame_id, string name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale, std_msgs/ColorRGBA color
Gets a Plane.
/get_object
Input: string name
Output: string frame_id, string name, string description, geometry_msgs/Pose pose, geometry_msgs/Point bounding_box_lwh, std_msgs/ColorRGBA color, string resource, bool use_material
Gets an Object.
/get_unknown_object
Input: string name
Output: string frame_id, string name, string description, uint8 pose_type, geometry_msgs/Pose pose, geometry_msgs/Vector3 scale
Gets an Unknown Object.
/clickable_positions
Input: string frame_id, string topic_suffix, float64 radius, std_msgs/ColorRGBA color, geometry_msgs/Point[] positions
Output: string topic
Shows clickable positions.
/robot_pose_prediction
Input: geometry_msgs/Pose[] positions, float64 ttl
Shows predicted positions of the robot's movement.
Messages
PrimitiveType - types of primitives
BillboardType - types of billboards
PoseType - specifies whether the coordinates are in the center or at the base of the primitive
MenuClicked - message for the update topic
MovementChanged - message for the update topic
PoseChanged - message for the update topic
ScaleChanged - message for the update topic
TagChanged - message for the update topic
PositionClicked - message for the position clicked topic
Published topics
All topics start with the prefix interaction_primitives.
/primitive_name/update/pose_changed (srs_interaction_primitives/PoseChanged)
Publishes pose changes of the primitive.
/primitive_name/update/scale_changed (srs_interaction_primitives/ScaleChanged)
Publishes scale changes of the primitive.
/primitive_name/update/menu_clicked (srs_interaction_primitives/MenuClicked)
Publishes information about menu interaction with the primitive.
/primitive_name/update/movement_changed (srs_interaction_primitives/MovementChanged)
Publishes movement changes of the primitive (Billboard only).
/primitive_name/update/tag_changed (srs_interaction_primitives/TagChanged)
Publishes tag changes of the primitive (Plane only).
/clickable_positions/topic_suffix (srs_interaction_primitives/PositionClicked)
Publishes the position which was clicked.
3.3.11 ENVIRONMENT MODEL (SRS_ENV_MODEL)
Overview
This package provides a new Dynamic Environment Model server, partially based on OctomapServer from the octomap_mapping stack. The model manages information about all detected, permanent, temporary and moving objects in the environment. It publishes its own topics, subscribes to input data stream topics, and its internal state can be controlled by services. Simple tutorials that walk through its basic use have been created.
System architecture
The Dynamic Environment Model (DEM) serves as a node containing octomap of the scene viewed by sensors and additional data types storing detected objects.
The environment model server is based on a core-plugin architecture. The whole structure can be virtually divided into two parts: plugins connected to the octomap, and independent data structures. The first part serves as a "raw data" processor, transforming the input octomap into different data structures. The second part of the environment server contains "higher level" data structures, generally added by detectors.
Raw data structures
o Octomap
o Collision map
o Collision grid
o Collision objects
o 2D map
o Marker array
o Point cloud

Other data structures
o Object tree

Other plugins
o Compressed point cloud plugin
o Example plugin
o Limited point cloud plugin
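As an illustration of this core-plugin split, the following Python sketch shows a server core fanning new map data out to two "raw data" plugins, each producing its own output structure. All class and method names here are hypothetical; the real server is implemented in C++ and works on octomap trees, not point tuples.

```python
class Plugin:
    """Base class: every plugin reacts to a new octomap frame."""
    def on_new_map(self, occupied_cells):
        raise NotImplementedError

class CollisionMapPlugin(Plugin):
    """'Raw data' plugin: re-publishes occupied cells as small collision boxes."""
    def __init__(self):
        self.collision_boxes = []
    def on_new_map(self, occupied_cells):
        # One fixed-size box per occupied cell (size is illustrative).
        self.collision_boxes = [(x, y, z, 0.05) for (x, y, z) in occupied_cells]

class Map2DPlugin(Plugin):
    """'Raw data' plugin: projects occupied cells onto a 2D grid."""
    def __init__(self):
        self.grid = set()
    def on_new_map(self, occupied_cells):
        self.grid = {(x, y) for (x, y, _z) in occupied_cells}

class EnvModelServer:
    """Core: owns the incoming data stream and fans it out to registered plugins."""
    def __init__(self):
        self.plugins = []
    def register(self, plugin):
        self.plugins.append(plugin)
    def insert_cloud(self, occupied_cells):
        for p in self.plugins:
            p.on_new_map(occupied_cells)
```

A caller would register the plugins and feed cells in; each plugin then holds its own derived structure, mirroring how the real plugins publish collision maps, 2D maps, etc. from one shared octomap.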
Octomap plugin
Contains the octomap data - an octomap tree with its own node type (octomap::EModelTreeNode). It can subscribe to a point cloud topic to fill the map. This plugin is used by some other plugins as a data source.
The octomap plugin also supports RGB colouring of the point cloud based on data from an RGB camera. It also provides octree filtering, used to remove speckles, and faster noise removal within the view cone.
Collision map plugin
This plugin converts occupied octomap nodes to a collision map (arm_navigation_msgs::CollisionMap). A diameter can be set to limit the map to the robot's surroundings (this reduces the data flow).
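The diameter limiting can be sketched as a simple radius filter around the robot. This is a pure-Python illustration under the assumption that the limit is applied in the horizontal plane; the real plugin operates on CollisionMap boxes rather than cell tuples.

```python
import math

def limit_collision_map(occupied_cells, robot_xy, diameter):
    """Keep only occupied cells within `diameter` of the robot position.

    occupied_cells: list of (x, y, z) tuples
    robot_xy: (x, y) robot position
    diameter: limiting diameter in the same units as the cells
    """
    rx, ry = robot_xy
    radius = diameter / 2.0
    return [(x, y, z) for (x, y, z) in occupied_cells
            if math.hypot(x - rx, y - ry) <= radius]
```

Dropping far-away cells before publishing is what keeps the message size, and hence the data flow, small.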
Collision grid plugin
This plugin converts occupied octomap nodes to yet another form - an occupancy grid (nav_msgs::OccupancyGrid). The tree depth used and the map sizes in the x and y directions can be set.
Collision objects plugin
It converts occupied octomap nodes to collision objects (arm_navigation_msgs::CollisionObject) and publishes them.
Compressed point cloud plugin
This plugin takes the robot camera position and uses it to filter the currently visible part of the octomap. This part is converted to point cloud form and sent as a differential frame. The compressed_pc_publisher node can be used to combine the differential frames and publish the complete octomap again (in the usual point cloud form). This indirection can be used to lower the data flow.
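The differential-frame idea can be sketched as set differences between successive clouds: the sender transmits only what appeared and disappeared, and the receiver replays the diffs to reconstruct the full cloud. This is an illustrative sketch (the real plugin additionally filters by camera visibility and works on octomap cells).

```python
def diff_frame(prev_cells, curr_cells):
    """Build a differential frame: points added and removed since the last frame."""
    prev, curr = set(prev_cells), set(curr_cells)
    return {"added": curr - prev, "removed": prev - curr}

def apply_diff(cloud, frame):
    """Reassemble the complete cloud from the accumulated state plus one diff,
    as the compressed_pc_publisher node does with incoming frames."""
    return (set(cloud) | frame["added"]) - frame["removed"]
```

When the scene changes little between frames, the diff is far smaller than the full cloud, which is the point of the scheme.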
2D map plugin
It converts and publishes octomap nodes to the 2D map (nav_msgs::OccupancyGrid).
Marker array plugin
It converts and publishes octomap nodes to the marker array (visualization_msgs::MarkerArray).
Point cloud plugin
It converts the frame id of an incoming point cloud (sensor_msgs::PointCloud2) to another one and re-publishes the cloud.
Object tree plugin
This plugin is used to store objects detected by various perceptors. The objects are stored in an octree structure and represented by their position and other required parameters instead of a point cloud. Objects can be visualized with the srs_interaction_primitives package.
Currently the plugin supports bounding boxes and planes detected by the tools in the srs_env_model_percp package. All plugin functionality is provided through services.
Compressed point cloud plugin
This plugin publishes only the visible part of the environment model. The cloud can be used as a live view of the octomap data, or together with the compressed point cloud publisher to transfer the octomap to another device.
Compressed point cloud publisher node
This node can be used to assemble the complete octomap from the differential frames published by the compressed point cloud plugin and to publish it.
ROS API Nodes
Node Name Published Topics Description
but_server_node /but_env_model/binary_octomap /but_env_model/pointcloud_centers /but_env_model/collision_object /but_env_model/markerarray_object /but_env_model/map2d_object /but_env_model/collision_map
Main server node.
compressed_pc_publisher /input Assembles differential frames and publishes the complete octomap
Services
All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
server_reset Reset server
server_pause Pause server
server_use_input_color bool Set true (default) if input color information should be stored and used.
get_collision_map int32 - local version stamp id
arm_navigation_msgs/CollisionMap
Get current collision map.
insert_planes srs_env_model_msgs/PlaneArray plane_array
Add array of planes
reset_octomap Reset only octomap data.
load_octomap string - input file name bool - true if all ok Tries to load octomap file (without header) as a new env. model
load_octomap_full string - input file name bool - true if all ok Tries to load octomap file (with header) as a new env. model
save_octomap string - output file name bool - true if all ok Tries to store current env. model as octomap file (without header)
save_octomap_full string - output file name bool - true if all ok Tries to store current env. model as octomap file (with header)
add_cube_to_octomap geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all cells inside given box as occupied
remove_cube_from_octomap
geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all cells inside given box as free
lock_collision_map bool - true if lock Lock collision map - map will not be updated from octree. Should be called before manual modifications.
is_new_collision_map time - time stamp of local data
bool - true if new data present, time - timestamp of new data
Service can be used to test if new collision map data is present
add_cube_to_collision_map geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all collision maps cells inside given box as occupied
remove_cube_from_collision_map
geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all collision map cells inside given box as free
set_crawl_depth uint8 - tree depth Set tree depth used for publishing
get_tree_depth uint16 - tree depth Get octomap tree depth
Object tree plugin services
Services provided by the Object tree plugin can be divided into two groups. The first is common to all stored objects; services in the second have variants for each supported object type. All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
get_objects_in_box geometry_msgs/Point position
geometry_msgs/Vector3 size
uint32[] object_ids
Returns ids of objects inside a box.
get_objects_in_halfspace
geometry_msgs/Point position
geometry_msgs/Vector3 normal
uint32[] object_ids
Returns ids of objects inside a halfspace defined by plane.
get_objects_in_sphere geometry_msgs/Point position
float32 radius
uint32[] object_ids
Returns ids of objects inside a sphere.
remove_object uint32 object_id Removes object from object tree.
show_object uint32 object_id Shows object using a srs_interaction_primitives server.
show_objtree Show octree structure using a Marker.
The following services are available for these object types: plane, aligned_box and bounding_box. All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
get_{object_type} uint32 object_id object_description Gets object from object tree.
get_similar_{object_type} object_description uint32 object_id Checks the object tree for an object similar to the one in the input parameter. Returns -1 if no similar object exists.
insert_{object_type} object_description uint32 object_id Inserts object into object tree. Replaces object with same id if exists.
insert_{object_type}_by_position object_description uint32 object_id Inserts object into object tree. Replaces similar object if exists.
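The geometric predicates behind the spatial query services above can be sketched in a few lines. This is an illustrative Python sketch; whether `position` denotes a box corner or centre, and the orientation convention of the half-space normal, are assumptions, not the documented service semantics.

```python
def in_box(p, position, size):
    """Axis-aligned box test (assuming `position` is the minimum corner)."""
    return all(position[i] <= p[i] <= position[i] + size[i] for i in range(3))

def in_sphere(p, center, radius):
    """Sphere membership test, mirroring get_objects_in_sphere."""
    return sum((p[i] - center[i]) ** 2 for i in range(3)) <= radius ** 2

def in_halfspace(p, point_on_plane, normal):
    """Half-space test, mirroring get_objects_in_halfspace: here the half-space
    is assumed to be the side of the plane the normal points into."""
    return sum((p[i] - point_on_plane[i]) * normal[i] for i in range(3)) >= 0.0

def objects_in(objects, predicate):
    """Return ids of stored objects (dict id -> position) satisfying a predicate."""
    return [oid for oid, pos in objects.items() if predicate(pos)]
```

The real plugin answers these queries against the octree, which prunes whole subtrees instead of testing every object individually.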
Published topics
List of all published topics. All topics are published in the /but_env_model/ namespace.
Topic Name Message Description
visualization_marker visualization_msgs/Marker Visualization markers
collision_object arm_navigation_msgs/CollisionObject Octomap data - occupied nodes - as a collision objects
pointcloud_centers sensor_msgs/PointCloud2 Octomap data - occupied nodes - as a point cloud
collision_map arm_navigation_msgs/CollisionMap Diameter limited octomap converted to the collision map
marker_array_object visualization_msgs/MarkerArray Octomap data as a marker array
map2d_object nav_msgs/OccupancyGrid Whole octomap converted to the 2D map
binary_octomap octomap_ros/OctomapBinary Binary octomap data - whole tree
visible_pointcloud_centers sensor_msgs/PointCloud2 Octomap data - occupied nodes visible from the rviz camera - as a point cloud
3.3.12 ENVIRONMENT PERCEPTION (SRS_ENV_MODEL_PERCP)
System architecture
The package consists of three major components:
BB Estimator - ROS service performing rough bounding box estimation of an object inside a specified 2D region of interest using the Kinect depth data.
Depth map segmentation - ROS service providing segmentation of the Kinect depth map (or another image-based depth map) using several approaches, such as maximum normal/depth difference, plane prediction etc.
Plane Fitting - ROS service providing plane fitting in point cloud data based on the 3D Hough transform. The node is able to iterate through several frames, adding data to the current point cloud and refining the detected planes.
Depth map segmentation
The main purpose of the depth map segmentation node is to divide the Kinect depth image into several regions of interest. It is intended as a preprocessing task for 3D environment creation, but also for other possible detectors (object detection etc.). Several algorithms are available for the depth map segmentation itself:
Depth based segmentation
The depth based segmentation method is the fastest due to the nature of the segmentation - a simple depth difference of neighbouring pixels. Although it is not a proper plane detector, its use as a preprocessing unit is recommended because of its very high efficiency. The scene is divided into multiple segments of differing depth, which form a very good input for plane detectors.
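The neighbouring-pixel depth difference can be sketched as follows: a pixel is marked as an edge when the depth step to its right or bottom neighbour exceeds a threshold, producing the gradient image that later segmentation stages consume. This is a minimal sketch, not the package's actual implementation.

```python
def depth_gradient(depth, threshold):
    """Mark edge pixels by thresholding the depth difference to the right and
    bottom neighbours. `depth` is a list of rows of depth values; returns a
    same-sized 0/1 edge image."""
    h, w = len(depth), len(depth[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Compare with the right neighbour.
            if x + 1 < w and abs(depth[y][x] - depth[y][x + 1]) > threshold:
                edges[y][x] = 1
            # Compare with the bottom neighbour.
            if y + 1 < h and abs(depth[y][x] - depth[y + 1][x]) > threshold:
                edges[y][x] = 1
    return edges
```

Regions enclosed by the marked edges then correspond to the depth-differing segments mentioned above.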
Normal based segmentation
The normal based segmentation method uses a similar principle to the depth based method, with the difference that it uses the normal difference for gradient image creation.
Depth and normal fused segmentation
Given the outputs from the depth based and normal based segmentation methods, a later fusion step can be applied to produce more accurate and stable final results.
Plane prediction segmentation
The plane prediction method emerged from the original need for a plane detector pre-processing algorithm. It applies the same approach as the segmentation above - a gradient image followed by watershed segmentation - but the gradient map computation is different and has two outputs: an edge strengths gradient image and a number-of-changes gradient image.
Tiled RANSAC segmentation
The last of the implemented methods approaches plane segmentation in a different way: planes are searched for using RANSAC. Tiled RANSAC has proven fast enough for real-time systems because of the small random sample search. At the same time, the possibility of filling regions outside the tile borders ensures that found planes are marked on the whole depth image.
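The per-tile RANSAC search can be illustrated by the classic loop: fit a candidate plane to three random points, count inliers within a tolerance, and keep the best candidate. Tiling and the region filling beyond tile borders are omitted here; this is a sketch, not the package's code.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (n, d) with n . x + d = 0 through three points, via cross product."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # collinear sample, no unique plane
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iterations=200, tol=0.01, seed=0):
    """Keep the candidate plane with the most inliers (distance <= tol)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = plane, inliers
    return best, best_inliers
```

Because each tile contains few points, the random sampling converges quickly, which is what makes the tiled variant real-time capable.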
Plane Fitting
The plane fitting component iteratively builds knowledge of the surrounding scene by creating plane segment information. Plane detection itself is performed by analysing a Hough space created by iteratively accumulating Hough transform information over several frames.
Although the main disadvantage of the Hough transform is its high memory requirement, our method overcomes this drawback by implementing a hierarchical adaptive structure, which keeps memory requirements as low as possible while speeding up the computation.
The output of this node is a list of detected planes sent in a message. Each plane is defined in two ways - by its plane equation and by a bounding rectangle (in 3D) for visualisation purposes.
ROS API Nodes
Node Name Published Topics Description
but_segmenter /but_env_model/seg_region_image;/but_env_model/seg_deviation_image
Node implementing the depth image segmenter. It publishes a region image (an image with region index information) and, for some methods, a deviation image (an image with the computed standard deviations of the detected planes). See below for when these topics are published.
but_plane_detector
/but_env_model/plane_array Node implementing the plane detector described above. It publishes a plane array message containing the list of found planes.
Services
Service Name Input Output Description
/but_env_model/clear_planes - Message about the result
Calling this service clears the accumulator of planes and the Hough space, which is constructed iteratively from all arriving point clouds. After the call the node is reset and continues detecting planes from scratch.
Published topics
Topic Name Message Description
/but_env_model/seg_region_image
sensor_msgs::Image
A 16-bit unsigned image of segment indices. Indices start at 1; an index lower than 1 signifies a border or an unsegmented region. It is published by every segmentation method.
/but_env_model/seg_deviation_image
sensor_msgs::Image
A region image with the computed depth standard deviations from a plane approximated through every region. It is NOT published by the depth segmentation method (normals are not computed for speed reasons).
/but_env_model/plane_array
srs_env_model::InsertPlanes
A plane array of srs_env_model_msgs::PlaneDes messages with the information for each plane. Each plane carries an INSERT/MODIFY/DELETE flag which specifies the action.
3.3.13 SPACENAV TELEOP (COB_SPACENAV_TELEOP)
Overview
The package contains a node for teleoperating the Care-O-Bot, or any other holonomic robot, with a SpaceNavigator 3D mouse.
The control input from the SpaceNavigator (SN) is recomputed so that the robot is teleoperated from the user's perspective. This helps to lower the operator's mental load and leads to a very intuitive way of interaction. To enable this functionality, the position of the RVIZ camera must be published as a TF transformation, for instance by the plugin from srs_env_model_ui.
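The user-perspective remapping can be illustrated as rotating the planar velocity command by the camera yaw, so "push forward" moves the robot away from the camera regardless of the robot's own heading. The formula below is an assumption for illustration; the node works with the full TF transform of the camera.

```python
import math

def to_user_perspective(vx, vy, cam_yaw):
    """Rotate a planar (vx, vy) command by the RVIZ camera yaw (radians)."""
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    return (c * vx - s * vy, s * vx + c * vy)
```

With the camera looking along the robot's y axis (yaw of pi/2), a forward push becomes a sideways command in the robot frame, which is exactly the intuition-preserving behaviour described above.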
The node can publish to two Twist topics. Normally it publishes to the first one (the safe topic in the case of Care-O-Bot); while the left SN button is pressed, it publishes to the second one (unsafe in the case of Care-O-Bot). When the right button is pressed once, robot control is switched to the robot perspective; the next press switches control back to the default mode.
The teleop node also offers an axis preference, which can be useful for some robot bases. If the user commands the robot to move along the x axis, for instance, then in some cases (depending on the parameters) small movements in the other axes can be ignored.
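The normalization and axis-preference logic can be sketched as below. The parameter names follow the ~spacenav/* parameters documented further down; the exact formula is an assumption for illustration.

```python
def preprocess_axes(raw, max_val=350.0, min_val_th=0.05,
                    ignore_th_high=0.9, ignore_th_low=0.1):
    """Shape raw SpaceNavigator axis values into normalized commands.

    raw: dict mapping axis name -> raw device value.
    """
    # Normalize to [-1, 1], clamp, and drop values under the dead-band threshold.
    norm = {}
    for axis, v in raw.items():
        n = max(-1.0, min(1.0, v / max_val))
        norm[axis] = 0.0 if abs(n) < min_val_th else n
    # Axis preference: if one axis is pushed hard while all others are barely
    # touched, keep only the dominant axis and zero the rest.
    for axis, n in norm.items():
        others = [abs(o) for a, o in norm.items() if a != axis]
        if abs(n) >= ignore_th_high and all(o < ignore_th_low for o in others):
            return {a: (n if a == axis else 0.0) for a in norm}
    return norm
```

A strong x push with slight accidental y drift thus yields a pure-x command, which is the behaviour the axis preference is meant to provide.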
ROS API Subscribed Topics /spacenav/offset (geometry_msgs/Vector3)
Space Navigator topic.
/spacenav/rot_offset (geometry_msgs/Vector3)
Space Navigator topic.
/spacenav/joy (sensor_msgs/Joy)
Space Navigator topic.
Published Topics /cmd_vel_safe (geometry_msgs/Twist)
Commands are published to this topic by default.
/cmd_vel_unsafe (geometry_msgs/Twist)
Commands are published to this topic while the left button is pressed.
Services ~disable (std_srvs/Empty)
Disables the SN teleop so the device can be used to control a different task.
~enable (std_srvs/Empty)
Enables previously disabled SN teleop.
Parameters ~max_vel_x (double, default: "0.3")
Maximal translational velocity in "x" axis.
~max_vel_y (double, default: "0.2")
Maximal translational velocity in "y" axis.
~max_vel_th (double, default: "0.3")
Maximal rotational velocity in "z" (theta) axis.
~spacenav/max_val (double, default: "350.0")
Maximal value on offset/rot_offset topic.
~spacenav/min_val_th (double, default: "0.05")
Lower values (normalized to 0-1 range) will be ignored.
~spacenav/ignore_th_high (double, default: "0.9")
Upper threshold for the axis preference. If the user's effort to move the robot along one axis is higher than this threshold, and the effort in all other axes is lower than ignore_th_low, that axis will be preferred and the others ignored (set to zero).
~spacenav/ignore_th_low (double, default: "0.1")
Lower threshold for axis preference.
~spacenav/instant_stop_enabled (bool, default: "false")
When enabled, the robot can be stopped by pressing the SN. The teleop then publishes zero velocities for a few seconds.
~use_rviz_cam (bool, default: "false")
Enables/disables usage of RVIZ camera position to transform control commands according to it.
~rviz_cam_link (string, default: "/rviz_cam")
RVIZ camera TF frame.
3.3.14 ARM NAVIGATION TESTS (SRS_ARM_NAVIGATION_TESTS)
This package contains scripts and launch files for testing the SRS assisted arm navigation pipeline. There is a fake decision making module which can be used instead of srs_decision_making for testing. The package also contains a copy of the assisted arm navigation generic states.
3.3.15 ASSISTED ARM NAVIGATION (SRS_ASSISTED_ARM_NAVIGATION)
Overview
The assisted arm navigation package offers similar functionality to the Warehouse Viewer - interactive (collision-free) arm motion planning. It has been designed for the Care-O-Bot within the SRS project, but can easily be modified for any robot running the arm_navigation stack. It enables a user to start arm planning through an RVIZ plugin with a simple interface. The goal position of the end effector can be set by a 6 DOF interactive marker or by using the SpaceNavigator device. For collision-free trajectory planning, a collision map produced by the Environment Model is used.
The current version has been tested only with ROS Electric.
Assisted arm navigation is divided into the following packages:
srs_assisted_arm_navigation: Main functionality.
srs_assisted_arm_navigation_msgs: Definition of services and action interface.
srs_assisted_arm_navigation_ui: RVIZ plugin.
For an example of how this functionality can be integrated into a more complex system, see srs_arm_navigation_tests, where the integration into the SRS structure is implemented in the form of SMACH generic states.
Screenshots
This is how it looks in RVIZ when the user starts arm planning. There is a 6 DOF interactive marker, a marker representing the arm (green), and a marker for the object to be grasped.
A collision with the environment or with the object is clearly indicated, as is the situation when the desired goal position is out of reach.
When the trajectory is planned, the user can play its animation several times and decide whether it is reasonable and safe.
The user interface consists of a few controls and contains a description of the task for the user (when the action interface is used to assign tasks).
More detailed user interface documentation can be found on its own page.
ROS API
The assisted arm navigation node communicates with the user interface using a set of services. There are also services for adding collision objects etc. The most important part is the action interface, which can be used to ask the user to perform a task.
Actionlib interface Action Subscribed Topics
/but_arm_manip/manual_arm_manip_action/goal (srs_assisted_arm_navigation_msgs/ManualArmManipActionGoal)
A task for user.
/but_arm_manip/manual_arm_manip_action/cancel (actionlib_msgs/GoalID)
A request to cancel given task.
Action Published Topics
/but_arm_manip/manual_arm_manip_action/feedback (srs_assisted_arm_navigation_msgs/ManualArmManipActionFeedback)
Feedback contains the current state of the task.
/but_arm_manip/manual_arm_manip_action/status (actionlib_msgs/GoalStatusArray)
Provides status information on the goals that are sent to the assisted arm navigation action.
/but_arm_manip/manual_arm_manip_action/result (srs_assisted_arm_navigation_msgs/ManualArmManipActionResult)
Result contains information about task from user (succeeded, failed etc.).
Topics, services, parameters Subscribed Topics /spacenav/joy (sensor_msgs/Joy)
Space Navigator topic.
/spacenav/offset (geometry_msgs/Vector3)
Space Navigator topic.
/spacenav/rot_offset (geometry_msgs/Vector3)
Space Navigator topic.
Published Topics /but_arm_manip/state (srs_assisted_arm_navigation_msgs/AssistedArmNavigationState)
A state of arm navigation.
Services /but_arm_manip/arm_nav_new (srs_assisted_arm_navigation_msgs/ArmNavNew)
Called from user interface, when there is a request for new trajectory planning.
/but_arm_manip/arm_nav_plan (srs_assisted_arm_navigation_msgs/ArmNavPlan)
Plan from current position to the goal position.
/but_arm_manip/arm_nav_play (srs_assisted_arm_navigation_msgs/ArmNavPlay)
Visualize trajectory.
/but_arm_manip/arm_nav_execute (srs_assisted_arm_navigation_msgs/ArmNavExecute)
Execute trajectory.
/but_arm_manip/arm_nav_reset (srs_assisted_arm_navigation_msgs/ArmNavReset)
Cancel current planning.
/but_arm_manip/arm_nav_success (srs_assisted_arm_navigation_msgs/ArmNavSuccess)
Task was successful (user pressed "Success" button).
/but_arm_manip/arm_nav_failed (srs_assisted_arm_navigation_msgs/ArmNavFailed)
User was not able to finish given task.
/but_arm_manip/arm_nav_refresh (srs_assisted_arm_navigation_msgs/ArmNavRefresh)
Refresh planning scene.
/but_arm_manip/arm_nav_coll_obj (srs_assisted_arm_navigation_msgs/ArmNavCollObj)
Add bounding box of object to the planning scene.
/but_arm_manip/arm_rem_coll_obj (srs_assisted_arm_navigation_msgs/ArmNavRemoveCollObjects)
Remove all collision objects.
/but_arm_manip/arm_nav_set_attached (srs_assisted_arm_navigation_msgs/ArmNavSetAttached)
Set collision object to be attached.
/but_arm_manip/arm_nav_move_palm_link (srs_assisted_arm_navigation_msgs/ArmNavMovePalmLink)
Move virtual end effector to some absolute position.
/but_arm_manip/arm_nav_move_palm_link_rel (srs_assisted_arm_navigation_msgs/ArmNavMovePalmLinkRel)
Move virtual end effector relatively.
/but_arm_manip/arm_nav_switch_aco (srs_assisted_arm_navigation_msgs/ArmNavSwitchACO)
Enable/disable artificial collision object attached to gripper.
/but_arm_manip/arm_nav_repeat (srs_assisted_arm_navigation_msgs/ArmNavRepeat)
User pressed "Repeat" button (if it was allowed by action).
/but_arm_manip/arm_nav_step (srs_assisted_arm_navigation_msgs/ArmNavStep)
Undo / redo.
/but_arm_manip/arm_nav_stop (srs_assisted_arm_navigation_msgs/ArmNavStop)
Stop execution of trajectory.
Parameters ~arm_planning/inflate_bb (double, default: "1.0")
Bounding box of each object inserted into planning scene will be inflated by this factor.
~arm_planning/world_frame (string, default: "map")
Planning will be performed in this coordinate system.
~arm_planning/end_eff_link (string, default: "sdh_palm_link")
End effector link.
~arm_planning/joint_controls (bool, default: "false")
Enable/disable interactive markers for all joints.
~arm_planning/make_collision_objects_selectable (bool, default: "false")
Whether an object inserted into the planning scene should be selectable or not.
~arm_planning/aco/link (string, default: "arm_7_link")
Artificial collision object (when enabled) will be attached to this link.
~arm_planning/aco/default_state (bool, default: "false")
Default state of the artificial collision object.
~spacenav/enable_spacenav (bool, default: "true")
Enables usage of Space Navigator control.
~spacenav/use_rviz_cam (bool, default: "true")
Enables usage of the RVIZ camera position so that the end effector is controlled from the user's perspective. The camera position must be published as a TF transformation; there is a plugin for publishing it in the srs_env_model_ui package.
~spacenav/rviz_cam_link (string, default: "rviz_cam")
TF frame for RVIZ camera.
~spacenav/max_val (double, default: "350.0")
Maximal value for data from Space Navigator. Higher values will be limited.
~spacenav/min_val_th (double, default: "0.05")
Threshold for normalized values (current / max_val). Must be in <0.0, 0.5> range.
~spacenav/step (double, default: "0.1")
Step for changes of end effector interactive marker position.
~arm_links (list of strings, default: "cob3-3 arm links")
List of arm links.
~set_planning_scene_diff_name (string, default: "environment_server/set_planning_scene_diff")
Service for communication with Environment Server.
~left_ik_name (string, default: "cob3_arm_kinematics/get_constraint_aware_ik")
Constraint aware IK service.
~planner_1_service_name (string, default: "ompl_planning/plan_kinematic_path")
Planner service name.
~trajectory_filter_1_service_name (string, default: "trajectory_filter_server/filter_trajectory_with_constraints")
Trajectory filter service name.
~vis_topic_name (string, default: "planning_scene_visualizer_markers")
Topic for publishing visualization markers.
~left_ik_link (string, default: "arm_7_link")
End effector link for which we perform IK.
~left_arm_group (string, default: "arm")
Arm group.
~execute_left_trajectory (string, default: "arm_controller/follow_joint_trajectory")
Arm controller action.
3.3.16 ASSISTED ARM NAVIGATION UI (SRS_ASSISTED_ARM_NAVIGATION_UI)
Overview
The package contains an RVIZ plugin which serves as the user interface for srs_assisted_arm_navigation, a plugin for assisted detection (bounding box estimation), and an arm navigation state visualizer.
Assisted arm navigation user interface
Subscribed Topics /but_arm_manip/state (srs_assisted_arm_navigation_msgs/AssistedArmNavigationState)
A state of arm navigation.
Services /but_arm_manip/arm_nav_start (srs_assisted_arm_navigation_msgs/ArmNavStart)
Called from the assisted arm navigation node to enable the controls and inform the user about the requested action.
Parameters ~wait_for_start (bool, default: "true")
If true, the controls in the plugin are disabled by default, so the user cannot start planning at an arbitrary time.
Presets for goal positions can be defined in a YAML file. Here is an example with two presets, where the first preset is a relative position and the second an absolute one. Orientation can be specified as RPY (three values) or as a quaternion (four values). Presets are displayed in the user interface.
arm_nav_presets:
-
name: Lift object
position: [0.0, 0.0, 0.15]
orientation: [0.0, 0.0, 0.0]
relative: true
-
name: Hold position
position: [-0.223, 0.046, 0.920]
orientation: [0.020, 0.707, -0.706, 0.033]
relative: false
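Interpreting the two orientation formats can be sketched as follows: three values are treated as RPY and converted to a quaternion, four values are taken as a quaternion directly. The standard ZYX (roll-pitch-yaw) convention is assumed here; this is an illustrative sketch, not the plugin's actual loader.

```python
import math

def rpy_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians, ZYX convention) to (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

def preset_orientation(values):
    """Interpret a preset's orientation list: 3 values = RPY, 4 = quaternion."""
    if len(values) == 3:
        return rpy_to_quaternion(*values)
    if len(values) == 4:
        return tuple(values)
    raise ValueError("orientation must have 3 (RPY) or 4 (quaternion) values")
```

So the first preset above ([0.0, 0.0, 0.0]) resolves to the identity quaternion, while the second is used as-is.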
Assisted arm navigation state visualizer
Node for publishing a text marker indicating the state of arm planning - it tells the user if the arm is in collision, if the requested position is not reachable, etc. It needs the rviz_cam_add TF frame published by the assisted arm navigation node to visualize the text marker near the end effector while planning.
Subscribed Topics /but_arm_manip/state (srs_assisted_arm_navigation_msgs/AssistedArmNavigationState)
Current state of planning.
Published Topics /but_arm_manip/state_vis (visualization_msgs/MarkerArray)
Marker with text informing user about state of current motion request.
Assisted detection (bounding box estimation)
RVIZ plugin for assisted object detection. Its actionlib interface can be called to get the ROI of an object in the image stream. The ROI can then be converted to a 3D bounding box using the BB estimator.
Subscribed Topics bb_video_in (sensor_msgs/Image)
Video stream.
Parameters ~is_video_flipped (bool, default: "true")
If true, plugin will flip video horizontally.
Action Subscribed Topics
/manual_bb_estimation_action/goal (srs_assisted_arm_navigation_msgs/ManualBBEstimationGoal)
Ask user to select ROI of object.
/manual_bb_estimation_action/cancel (actionlib_msgs/GoalID)
A request to cancel given task.
Action Published Topics
/manual_bb_estimation_action/feedback (srs_assisted_arm_navigation_msgs/ManualBBEstimationFeedback)
Feedback contains the current selected ROI.
/manual_bb_estimation_action/status (actionlib_msgs/GoalStatusArray)
Provides status information on the goal.
/manual_bb_estimation_action/result (srs_assisted_arm_navigation_msgs/ManualBBEstimationResult)
Result contains final ROI.
3.3.17 ASSISTED GRASPING (SRS_ASSISTED_GRASPING)
Overview
Assisted grasping has been developed to allow safe and robust grasping of unknown or unrecognized objects with the SDH gripper equipped with tactile sensors. It has a separate API definition (actionlib interface), code, and a GUI in the form of an RVIZ plugin.
When calling the grasp action, it is possible to specify a target configuration (angles) for all joints of the gripper, the grasp duration, and the maximum forces for all tactile pads. Then, for each joint, velocities with acceleration and deceleration ramps are calculated in such a way that all joints reach the target configuration at the same time. If the measured force on a pad exceeds the requested maximum force, the movement of the corresponding joint is stopped. With different target configurations and appropriate forces, a wide range of objects can be grasped.
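The synchronized ramp scheme can be sketched with a trapezoidal velocity profile: since every joint shares the same total time and ramp fractions, each joint's peak velocity is scaled so that the area under its profile equals its angular distance, and all joints arrive together. The exact profile used by the package may differ; this is an illustrative sketch (parameter names follow ~a_ramp / ~d_ramp below, expressed here as fractions).

```python
def joint_velocity(delta, t, total_time, a_ramp=0.1, d_ramp=0.2):
    """Velocity command (rad/s) for one joint at time t.

    delta: angular distance to cover (rad); total_time: shared duration (s);
    a_ramp/d_ramp: acceleration/deceleration ramp lengths as fractions of
    the trajectory.
    """
    # Peak velocity such that the area under the trapezoid equals delta.
    v_peak = delta / (total_time * (1.0 - a_ramp / 2.0 - d_ramp / 2.0))
    ta, td = a_ramp * total_time, d_ramp * total_time
    if t < 0 or t > total_time:
        return 0.0
    if t < ta:                        # acceleration ramp
        return v_peak * t / ta
    if t > total_time - td:           # deceleration ramp
        return v_peak * (total_time - t) / td
    return v_peak                     # cruise phase

def commanded_velocity(delta, t, total_time, pad_force, max_force):
    """Stop the joint as soon as its tactile pad exceeds the allowed force."""
    if pad_force > max_force:
        return 0.0
    return joint_velocity(delta, t, total_time)
```

Integrating the profile over the full duration recovers the commanded distance, and joints with larger deltas simply run at proportionally higher peak velocities.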
The assisted grasping package also offers a node for preprocessing the tactile data with median and Gaussian filters with configurable parameters, and a node for simulating the velocity mode of the SDH.
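The median stage of that preprocessing can be illustrated in one dimension: each sample is replaced by the median of its neighbourhood, which removes single-sample spikes that would otherwise falsely trigger the force limit. This is a minimal sketch; the real node filters 2-D tactile matrices and also applies a Gaussian filter.

```python
def median_filter(values, window=3):
    """1-D median filter; edges are padded by repeating the border samples."""
    if window % 2 == 0:
        raise ValueError("window must be odd")
    half = window // 2
    padded = [values[0]] * half + list(values) + [values[-1]] * half
    return [sorted(padded[i:i + window])[half] for i in range(len(values))]
```

A lone spike in the force readings is suppressed while genuine sustained contact passes through unchanged.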
Grasping node Topics, parameters Subscribed Topics tact_in (schunk_sdh/TactileSensor)
Data for tactile feedback.
state_in (pr2_controllers_msgs/JointTrajectoryControllerState)
Data for position feedback.
Published Topics velocity_out (brics_actuator/JointVelocities)
Velocity commands.
Parameters ~max_force (double, default: "1000.0")
Maximal allowed force. Higher values will be limited to this value.
~max_velocity (double, default: "1.5")
Maximal velocity (rad/s).
~a_ramp (double, default: "10.0")
Length of the acceleration ramp as a percentage of the whole trajectory.
~d_ramp (double, default: "20.0")
Length of the deceleration ramp as a percentage of the whole trajectory.
~rate (double, default: "20.0")
Rate of the internal controller; commands are also published at this rate.
The node also requires a list of joints with some additional information. This can easily be provided in a YAML file:
joints:
-
joint: sdh_knuckle_joint
has_tactile_pad: false
static: true
-
joint: sdh_thumb_2_joint
has_tactile_pad: true
static: false
-
joint: sdh_thumb_3_joint
has_tactile_pad: true
static: false
-
joint: sdh_finger_12_joint
has_tactile_pad: true
static: false
-
joint: sdh_finger_13_joint
has_tactile_pad: true
static: false
-
joint: sdh_finger_22_joint
has_tactile_pad: true
static: false
-
joint: sdh_finger_23_joint
has_tactile_pad: true
static: false
Actionlib API Action Subscribed Topics
/reactive_grasping_action/goal (srs_assisted_grasping_msgs/ReactiveGraspingActionGoal)
Target configuration of joints, maximal allowed force for each tactile pad and time to reach target configuration.
/reactive_grasping_action/cancel (actionlib_msgs/GoalID)
A request to cancel given goal.
Action Published Topics
/reactive_grasping_action/feedback (srs_assisted_grasping_msgs/ReactiveGraspingActionFeedback)
Feedback is empty.
/reactive_grasping_action/status (actionlib_msgs/GoalStatusArray)
Status.
/reactive_grasping_action/result (srs_assisted_grasping_msgs/ReactiveGraspingActionResult)
Current joint values, measured forces and the actual time to stop for all joints.
3.3.18 ASSISTED GRASPING UI (SRS_ASSISTED_GRASPING_UI)
Overview
The package contains a user interface for srs_assisted_grasping in the form of an RViz plugin (ROS Electric).
When grasping, the user can select an object category and then press the Grasp button. Categories are predefined in a configuration file; each category has a name, a target configuration of the SDH joints and desired forces for all fingers. In the UI, the user can see the category names and the corresponding maximum forces. After the grasp has been executed, the user can decide whether it was successful using the tactile data visualization.
3.3.19 ENV MODEL (SRS_ENV_MODEL)
Overview
This package provides a new Dynamic Environment Model server, partially based on OctomapServer from the octomap_mapping stack. The model manages information about all detected, permanent, temporary and moving objects in the environment. It publishes its own topics, subscribes to several input data stream topics, and its internal state can be controlled by services. Simple tutorials are available that walk through its basic use.
System architecture
The Dynamic Environment Model (DEM) serves as a node containing octomap of the scene viewed by sensors and additional data types storing detected objects.
The environment model server is based on a core-plugin architecture. The whole structure can be divided into two parts: plugins connected to the octomap, and independent data structures. The first part serves as a raw-data processor, transforming the input octomap into different data structures. The second part of the environment server contains higher-level data structures, generally added by detectors.
Raw data structures:
- Octomap
- Collision map
- Collision grid
- Collision objects
- 2D map
- Marker array
- Point cloud

Other data structures:
- Object tree

Other plugins:
- Compressed point cloud plugin
- Example plugin
- Limited point cloud plugin
Octomap plugin
Contains the octomap data: an octomap tree with its own node type (octomap::EModelTreeNode). It can be subscribed to a point cloud topic to fill the map. This plugin is used by several other plugins as a data source.
The octomap plugin also supports RGB colouring of the point cloud based on data from an RGB camera. It additionally provides octree filtering, used to remove speckles, and faster noise removal in the view cone.
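The speckle filtering mentioned above can be illustrated with a minimal sketch. The assumed rule here is that an occupied voxel with no occupied 6-neighbour is treated as a speckle; the real octree filter may use different criteria:

```python
def remove_speckles(occupied):
    """Drop occupied voxels that have no occupied 6-neighbour (speckles).
    `occupied` is a set of (x, y, z) integer voxel coordinates.
    Simplified illustration of the octree speckle filter, not the real code."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {v for v in occupied
            if any((v[0] + dx, v[1] + dy, v[2] + dz) in occupied
                   for dx, dy, dz in neighbours)}
```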
Collision map plugin
This plugin converts occupied octomap nodes to the collision map (arm_navigation_msgs::CollisionMap). A diameter can be set to limit the collision distance around the robot (this leads to a smaller data flow).
Collision grid plugin
This plugin converts occupied octomap nodes to another form, an occupancy grid (nav_msgs::OccupancyGrid). The tree depth used and the map sizes in the x and y directions can be set.
Collision objects plugin
It converts and publishes occupied octomap nodes as collision objects (arm_navigation_msgs::CollisionObject).
Compressed point cloud plugin
This plugin takes the robot camera position and uses it to filter the currently visible part of the octomap. This part is converted to point cloud form and sent as a differential frame. The compressed_pc_publisher node can be used to combine the differential frames and publish the complete octomap again (in the usual point cloud form). This indirection can be used to reduce the data flow.
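The differential-frame mechanism can be sketched as follows. This is illustrative only; the real messages carry point clouds and a camera pose, and the class name is hypothetical:

```python
class FrameAssembler:
    """Recombine differential frames into a complete cloud, in the spirit of
    the compressed_pc_publisher node (illustrative sketch, not the real code)."""

    def __init__(self):
        self.points = {}  # voxel id -> point data

    def add_frame(self, is_complete, points):
        if is_complete:
            # a complete frame replaces everything seen so far
            self.points = dict(points)
        else:
            # a differential frame only updates the currently visible part
            self.points.update(points)
        return dict(self.points)
```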
2D map plugin
It converts and publishes octomap nodes to the 2D map (nav_msgs::OccupancyGrid).
Marker array plugin
It converts and publishes octomap nodes to the marker array (visualization_msgs::MarkerArray).
Point cloud plugin
It can change the frame id of an incoming point cloud (sensor_msgs::PointCloud2) and re-publish it.
Object tree plugin
This plugin is used to store objects detected by various perceptors. The objects are stored in an octree structure and represented by their position and other required parameters instead of a point cloud. Objects can be visualized with the srs_interaction_primitives package.
Currently the plugin supports bounding boxes and planes detected by the tools in the srs_env_model_percp package. All plugin functionality is provided via services.
Compressed point cloud plugin
This plugin publishes only the visible part of the environment model. The resulting cloud can be used as a live view of the octomap data, or together with the compressed point cloud publisher to transfer the octomap to another device.
Compressed point cloud publisher node
This node can be used to assemble and publish the complete octomap from the differential frames published by the compressed point cloud plugin.
ROS API
Nodes
Node Name Published Topics Description
but_server_node /but_env_model/binary_octomap /but_env_model/pointcloud_centers /but_env_model/collision_object /but_env_model/markerarray_object /but_env_model/map2d_object /but_env_model/collision_map
Main server node.
compressed_pc_publisher /input Differential frame assembly and complete octomap publishing
Services
All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
server_reset Reset server
server_pause Pause server
server_use_input_color bool Set true (default) if input color information should be stored and used.
get_collision_map int32 - local version stamp id arm_navigation_msgs/CollisionMap Get current collision map.
insert_planes srs_env_model_msgs/PlaneArray plane_array
Add array of planes
reset_octomap Reset only octomap data.
load_octomap string - input file name bool - true if all ok Tries to load octomap file (without header) as a new env. model
load_octomap_full string - input file name bool - true if all ok Tries to load octomap file (with header) as a new env. model
save_octomap string - output file name bool - true if all ok Tries to store current env. model as octomap file (without header)
save_octomap_full string - output file name bool - true if all ok Tries to store current env. model as octomap file (with header)
add_cube_to_octomap geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all cells inside given box as occupied
remove_cube_from_octomap
geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all cells inside given box as free
lock_collision_map bool - true if lock Lock collision map - map will not be updated from octree. Should be called before manual modifications.
is_new_collision_map time - time stamp of local data
bool - true if new data present, time - timestamp of new data
Service can be used to test if new collision map data is present
add_cube_to_collision_map geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all collision maps cells inside given box as occupied
remove_cube_from_collision_map
geometry_msgs::Pose - cube position, geometry_msgs::Pose - cube size
Set all collision map cells inside given box as free
set_crawl_depth uint8 - tree depth Set tree depth used for publishing
get_tree_depth uint16 - tree depth Get octomap tree depth
Object tree plugin services
The services provided by the Object tree plugin can be divided into two groups. The first is common to all saved objects; services in the second group have variants for each supported object type. All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
get_objects_in_box geometry_msgs/Point position, geometry_msgs/Vector3 size uint32[] object_ids Returns ids of objects inside a box.
get_objects_in_halfspace geometry_msgs/Point position, geometry_msgs/Vector3 normal uint32[] object_ids Returns ids of objects inside a halfspace defined by a plane.
get_objects_in_sphere geometry_msgs/Point position, float32 radius uint32[] object_ids Returns ids of objects inside a sphere.
remove_object uint32 object_id Removes object from object tree.
show_object uint32 object_id Shows object using a srs_interaction_primitives server.
show_objtree Show octree structure using a Marker.
The following services are available for these object types: plane, aligned_box and bounding_box. All services are published in the /but_env_model/ namespace.
Service Name Input Output Description
get_{object_type} uint32 object_id object_description Gets object from object tree.
get_similar_{object_type} object_description uint32 object_id Checks the object tree for an object similar to the one given as input. Returns -1 if no such object exists.
insert_{object_type} object_description uint32 object_id Inserts object into object tree. Replaces object with same id if exists.
insert_{object_type}_by_position object_description uint32 object_id Inserts object into object tree. Replaces similar object if exists.
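The get_similar_* lookup semantics can be illustrated with a small sketch. The assumption here is that similarity is a simple position tolerance; the real services compare full object descriptions:

```python
def find_similar(tree, obj, tol=0.05):
    """Return the id of a stored object whose position lies within `tol`
    of `obj` on every axis, or -1 if no such object exists (mirroring the
    get_similar_* services). `tree` maps object id -> (x, y, z) position."""
    for oid, pos in tree.items():
        if all(abs(a - b) <= tol for a, b in zip(pos, obj)):
            return oid
    return -1
```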
Messages
srs_env_model/OctomapUpdates
header uint8 - 0 if this is partial frame sensor_msgs/CameraInfo - robot camera position used in frame sensor_msgs/PointCloud2 - frame data
Partial frame message used in compressed pointcloud plugin.
Published topics
List of all published topics. All topics are published in the /but_env_model/ namespace.
Topic Name Message Description
visualization_marker visualization_msgs/Marker Visualization markers
collision_object arm_navigation_msgs/CollisionObject Octomap data - occupied nodes - as collision objects
pointcloud_centers sensor_msgs/PointCloud2 Octomap data - occupied nodes - as a point cloud
collision_map arm_navigation_msgs/CollisionMap Diameter limited octomap converted to the collision map
marker_array_object visualization_msgs/MarkerArray Octomap data as a marker array
map2d_object nav_msgs/OccupancyGrid Whole octomap converted to the 2D map
binary_octomap octomap_ros/OctomapBinary Binary octomap data - whole tree
visible_pointcloud_centers sensor_msgs/PointCloud2 Octomap data - occupied nodes visible from the rviz camera - as a point cloud
3.3.20 UI BUT (SRS_UI_BUT)
Overview
The srs_ui_but stack contains tools used for information visualization in RVIZ and interactive manipulation with robot components.
System architecture
It consists of four components:
- but_display
- but_data_fusion
- but_ogre_tools
- but_services
BUT display
This set contains additional displays and visualizers for RViz. The displays can also be used as example code for display writing (but_example_pane, but_display) and as an example of RViz camera tracking and casting (but_camcast).
Another set of components are the visualizers:
Distance Linear Visualizer
This tool visualizes the distance between a specified robot link and the relevant closest point.
Distance Circular Indicator
This tool shows a set of circles at particular distances around a specified robot link.
COB Stretch Indicator
This tool visualizes the stretch of the Care-O-bot. It can also show the robot's bounding box.
Camera Display
An RViz display type derived from the Map Display. It awaits messages of type srs_ui_but::ButCamMsg (published by the view node) containing the position and size of the camera view polygon. It also needs to be subscribed to a video stream topic; the video frames are drawn over the rectangle as its texture.
Object Manager
This tool allows adding objects stored in the srs_object_database. Objects are visualized and can be moved or rotated.
The Object Manager also provides a service which "detects" all added objects.
Data fusion
This component experiments with fusing the 3D scene and a video stream, used to visualize the scene around the Care-O-bot. So far, one option is implemented: visualizing the point cloud and the camera video in a single 3D view. It shows the view frustum in front of the robot, visualized with standard markers, and a rectangular polygon inside this view frustum that is dynamically textured with the current video frame.
Data fusion consists of two main parts. The first is a node called view; this node needs to be running for data fusion to be visualized successfully, as it computes both the view frustum and the camera view polygon position and orientation over time. The other is the new CButCamDisplay display described among the displays above.
ROS API
Nodes
An overview of all newly created ROS nodes.
Node Name Published Topics Description
but_services_server This node provides the BUT services.
cob_stretch_publisher This node publishes the current stretch of the COB.
view /but_data_fusion/view_frustum /but_data_fusion/cam_view
Computes both the view frustum and the camera view polygon position and orientation over time.
Services
List of all new services.
Service Name Input Output Description
/but_services/get_closest_point
string link ClosestPoint closest_point_data
This service calculates the closest point between a specified robot link and the current point cloud.
/but_services/set_point_cloud_topic
string topic Sets the topic from which the closest point should be calculated.
/but_services/get_added_objects
* cob_object_detection_msgs/DetectionArray object_list
"Detects" all objects added by Object Manager.
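The core computation behind /but_services/get_closest_point can be sketched as follows. This is simplified; the real service works on the subscribed point cloud and TF-transformed link positions, and the function name is hypothetical:

```python
import math

def get_closest_point(link_pos, cloud):
    """Return (distance, point) of the cloud point nearest to a robot
    link position. Illustrative version of the service logic only."""
    best = min(cloud, key=lambda p: math.dist(link_pos, p))
    return math.dist(link_pos, best), best
```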
Messages
List of all newly defined messages.
Msg Name Content Description
ClosestPoint time time_stamp bool status float32 distance geometry_msgs/Vector3 position
Closest point data (distance and position of the closest point).
ButCamMsg Header header geometry_msgs/Pose pose geometry_msgs/Vector3 scale
Reduced Marker message, covering all needs of CButCamDisplay.
COBStretch float32 radius float32 height time time_stamp
Data with calculated robot's stretch.
Published topics
List of all published topics.
Topic Name Message Description
/but_data_fusion/view_frustum visualization_msgs::Marker Line list representing view frustum
/but_data_fusion/cam_view srs_ui_but::ButCamMsg Geometry of camera view polygon used by CButCamDisplay display in Rviz.
/but_services/cob_stretch srs_ui_but::COBStretch COB's stretch
3.3.21 USER TEST PACKAGE (SRS_USER_TESTS)
Preparation
Create a static 2D map of the environment
Record the map to the srs_user_tests package
roscd srs_user_tests/ros/config/ipa-apartment
rosrun map_server map_saver
Create initial environment model
start navigation with new map
roscd srs_user_tests/ros/config/ipa-apartment
roslaunch cob_navigation_global 2dnav_ros_dwa.launch map:=$(pwd)/map.yaml
record bag file on pc2 (TODO: Check if topics are correct)
rosbag record -O /tmp/srs_setup.bag /tf /cam3d/depth_registered/points
Create environment model for BUT version
rosbag play /tmp/srs/srs_setup.bag
roslaunch srs_env_model but_envmodel_robot.launch
Save map
roslaunch srs_user_tests save_octomap.launch
roscd srs_user_tests/data
mv saved_octomap.enm ipa-apartment/octomap.enm
Load map
roslaunch srs_user_tests load_octomap.launch
Create environment model for IPA version
rosbag play /tmp/srs/srs_setup.bag
roslaunch cob_3d_mapping_pipeline mapping.launch
rosrun cob_3d_mapping_point_map trigger_mapping.py <start|stop>
Save map to bag file
rosrun cob_3d_mapping_geometry_map get_geometry_map /tmp/srs/geomap.bag
roscd srs_user_tests/data/ipa-apartment
cp /tmp/srs/geomap.bag .
Load initial map from bag file
roscd srs_user_tests/data/ipa-apartment
roslaunch cob_3d_mapping_geometry_map set_geometry_map.launch file_path:=geomap.bag
Visualization
rosrun rviz rviz
Start user tests
On the robot or simulation machine, start the SRS components:
roslaunch srs_user_tests run_test.launch exp:=e2 task:=man2 cond:=a sim:=false log:=true id:=p1
Arguments:
exp: code of the experiment.
task: code of the task.
cond: condition or subtask.
sim: set to true for a simulated scenario; otherwise omit this argument or set it to false.
log: when set to true, logs are recorded on the robot (to srs_user_tests/data/logs/p1/e2/man2/a for the given example). Topics for logging are specified separately for each task in the configuration directory.
id: an arbitrary string that can be used to tag the test/participant.
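The resulting log directory layout follows the argument values; it can be sketched with a hypothetical helper, grounded in the example path given above:

```python
def log_dir(participant_id, exp, task, cond):
    """Directory where run_test.launch stores logs, e.g.
    srs_user_tests/data/logs/p1/e2/man2/a for id:=p1 exp:=e2 task:=man2 cond:=a."""
    return "srs_user_tests/data/logs/{}/{}/{}/{}".format(participant_id, exp, task, cond)
```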
On the remote PC:
roslaunch srs_user_tests run_test_remote.launch exp:=e2 task:=man2 cond:=a sim:=false log:=true id:=p1
The arguments exp, task, cond, sim and id have the same meaning for both launch files. Only log has a slightly different function:
log: when set to true, screenshots are recorded (to srs_user_tests/data/logs/p1/e2/man2/a for the given example) on the remote machine.
3.4 CONCLUSIONS
With the described functionality and implementation state of the hardware and software components, the SRS test cases defined for the technology evaluation were performed. The test results show that the system is sufficiently stable and robust to be run and observed.
4 PROOF OF PERFORMANCE
The performance of the system has been evaluated through several user tests and integrated system simulations. After each user evaluation, the system was updated based on the latest test results. The final system was checked both by the user test in Stuttgart, Germany in February 2012 and by the COB manipulation SRS grasping tests within the SRS scenario. The final grasping tests are summarized below:
4.1 OBJECTIVE
This test aims to exercise all components involved in the simulation.
Target 1: The robot completes the task by itself, or user intervention from UI_PRO is triggered.
Target 2: The robot completes the task without any help.
4.2 TEST PROCEDURE
The milkbox is placed on a table in the ipa-kitchen. The robot is supposed to grasp it and put it on the tray. The arm navigation is based on the IPA planned arm navigation. The grasp is based on ROB srs_grasping using OpenRAVE. The integration and user intervention logic are based on srs_decision_making. The test is repeated 10 times. The commands used for the test are attached below:
#start the COB simulation and SRS software
roslaunch srs_scenarios srs_scenario_planned_grasp.launch
#start IPA arm planning environment
roslaunch cob_arm_navigation start_planning_environment.launch
#place the milkbox in kitchen
roslaunch srs_scenarios milk_kitchen.launch
#start the manipulation
rosrun srs_decision_making test_client_json.py
4.3 FIRST TEST AND ANALYSIS
The results of the first test are shown on Table 3.
TABLE 3 COB MANIPULATION SRS GRASPING TEST RESULTS
Test Target 1 Target 2 Note
1.1 Succeeded: The milkbox was not grasped by the robot, but the user intervention function was successfully triggered for manipulation based on UI_PRO
Failed: The simulation console shows the milkbox was grasped by the robot; however, according to the tactile sensor feedback the object was not held, and the task execution was terminated.
See log file of test 1.1.
1.2 Succeeded: The milkbox was grasped by the robot successfully.
Failed: The milkbox was grasped successfully, but during execution of the "put on tray" task it was dropped accidentally, probably due to an unexpected finger collision before the grasp; the milkbox was not held in the expected position.
Check the milkbox weight. Adjust the gripping force of the robot.
1.3 Succeeded Succeeded
1.4 Failed: User intervention was not triggered after a failed execution (see next column).
Failed: The robot cannot correctly detect whether there is an object in its hand. The milkbox was dropped during the task execution, but the robot still executed the "put on tray" action, and the output was "task_succeeded".
See log file of test 1.4.
1.5 Succeeded Succeeded
1.6 Succeeded: The milkbox was not grasped by the robot, but the user intervention function was successfully triggered.
Failed: The robot arm did not reach the right position for the milkbox. Since the detection process was faked in simulation, the problem is most likely in the IK calculation or the arm navigation. This symptom is very similar to what we experienced in Milan.
See log file of test 1.6.
1.7 Failed: User intervention was not triggered after a failed execution (see next column).
Failed: During the test, the robot needs to hold the milkbox with its hand. However, the fingers touch the milkbox at different times: the first finger to touch may push the box and change its position, so the other fingers do not hold the box as calculated, and the milkbox eventually slipped out of the SDH.
The movement of the fingers needs to be re-planned. The real hand of the COB is not as rigid as the one simulated in Gazebo, so real performance may still be better.
1.8 Failed: User intervention was not triggered after a failed execution (see next column).
Failed: Same as above. Same as above.
1.9 Failed: User intervention was not triggered after a failed execution (see next column).
Failed: Same as above. Same as above.
1.10 Succeeded: The milkbox was not grasped by the robot, but the user intervention function was successfully triggered.
Failed: The simulation console shows the milkbox was grasped by the robot, but the log file reports a failure, and the task execution was terminated.
See log file of test 1.10.
From the first test results we can see:
(1) Planned arm movement, pre-grasp and post-grasp worked as expected. There was no collision with the table during the test. The only issue with the planned arm movement is shown in test 1.6.
(2) The success rate of Target 1 is 60%. The reason is that the software checks the tactile sensor feedback only once, immediately after the grasp. From the test results we can conclude that all existing failure cases could be avoided with one or two additional checks of the tactile sensor feedback.
(3) The success rate of Target 2 is only 20%. As discussed for test 1.7, the problem is that the fingers do not touch the milkbox at the same time: the first finger to touch pushes the milkbox slightly, so the other two fingers cannot reach their expected positions.
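The fix suggested in point (2), repeating the tactile check after the grasp, can be sketched as follows. This is illustrative logic only; read_forces and the thresholds are hypothetical:

```python
def grasp_confirmed(read_forces, threshold=1.0, checks=3, min_pads=2):
    """Poll the tactile pads several times after the grasp instead of once;
    the grasp counts as confirmed only if enough pads report a contact force
    above `threshold` on every check. Illustrative logic only."""
    for _ in range(checks):
        forces = read_forces()
        if sum(f > threshold for f in forces) < min_pads:
            return False
    return True
```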
4.4 FINAL TEST
Based on the first test results and the analysis, the system was updated as follows:
1) Improve the success rate of Target 1 to 100% by adding checks of the tactile sensor and the tray sensor.
2) Improve the success rate of Target 2 to 60% by confirming that the problem actually occurs on the real robot and then adjusting the finger movement.
After the system update, the final test was carried out with the COB manipulation SRS grasping tests within the SRS scenario. The final results are summarized in the table below.
Target No. Target Description Target success rate Actual success rate
1 Robot completes the task by itself, or user intervention from UI_PRO is triggered
100% 100%
2 Robot completes the task without any help
60% 80%
4.5 SUMMARY
From the final test results above, the expected performance of the SRS system has been confirmed. The success rate of task completion, including both robot self-completion and user intervention, is 100%, which indicates that the designed system is highly reliable. The success rate of task completion by the robot alone is 80%, which also exceeds the design aim for the system. Due to the limits of robot task performance and the occurrence of unexpected situations, the robot was occasionally unable to complete its task, which shows the importance of the semi-autonomous robotic solution: human intervention and guidance guarantee task completion in case the autonomous robot behaviour fails.
5 PROOF OF USABILITY
5.1 INTRODUCTION
The proof of usability is based on user validations and further system integration tests. After each on-site user validation test, suggestions for the user interface were summarised. After the system was updated with the revised user interface, further system integration tests and user validations were held to validate the updated UI.
5.2 PROOF OF USABILITY FOR LOCAL USERS
The usability test for local users was performed three times, as summarised below:
Time Location Participants Procedure
1 Jan 2012 San Sebastian, Spain
14 patients: 10 women and 4 men; mean age: 79.14; maximum age: 92; minimum age: 60
The system fetched a cocoa storage jar and delivered it to the user
2 Feb 2012 Stuttgart, Germany
two elderly people : 1 female, age 80; 1 male, age 81
Scenario 1: An elderly person was sitting on the couch and used a handheld interaction device to make the robot fetch a book from a locker in the dining room. However, the robot failed at executing the task (this failure was planned / simulated) because a stool hindered appropriate path planning for delivering the object to the user. Therefore, a remote operator (located in another apartment) was called and remotely navigated the robot to deliver the book.
Scenario 2: An elderly person was in bed and the robot fetched a medicine box from the window sill in the kitchen.
3 May 2012 Milan, Italy 16 elderly: 11 female and 5 male; mean age: 79.6; maximum age: 96; minimum age: 71; mean education: 2.3. 1 young disabled man: age 23; Barthel index: 33/100; education: 3
Elderly improving their autonomy at home: elderly people alone at home, sitting on the sofa, use UI_LOC to autonomously send the robot to bring a medicine box located on a shelf in the corridor.
Based on the user test results and suggestions, UI_LOC was updated and validated by integration tests and user validations, performed twice in Stuttgart, Germany, in November 2012 and February 2013. The updates and evaluation results are summarized in the table below:
TABLE 4 SRS UI_LOC UPDATE AND EVALUATION
PROBLEM SUGGESTIONS FROM USER TEST
IMPLEMENTED (YES/NO)
TESTING RESULT
Insufficient, prolonged or inaccurate pressure on the touch keys.
Traditional button at least for emergency? Voice control addition?
Partially
Improved touch behaviour of the device
Call volume and speaker volume too low for many elderly with hearing problems
Adding vibration option?
Yes Vibration has been added to the device
Font size not big enough for some of the elderly people with vision problems
Vocal menus and “screen reader”?
Yes Screen reader has been integrated
Illiterate people can’t read text Adding icons wherever it’s possible?
Yes Text to voice conversion has been integrated and functioning
On the Galaxy device, when you receive a call you have to activate the "speaker" (a small icon on the phone), which is an extra and complex operation.
Yes The complex action has been avoided
The "Go back" function is not provided inside the UI_LOC app; it relies on the Galaxy device itself (an invisible icon on the phone that appears only when you pass a finger over it)
Back button integration on the UI_LOC software?
Yes The back button has been integrated in the software
Skype not well integrated into the UI_LOC app. When you make a call, the call (including emergency calls) does not start automatically; instead, the Skype contact list opens and a long sequence of actions is required to reselect the right contact and finally make the call
Yes Skype has been integrated in the UI_LOC
5.3 PROOF OF USABILITY FOR PRIVATE USERS
The usability test for private users was performed twice, as summarised below:
Time Location Participants Procedure
1 Feb 2012 Stuttgart, Germany
remote operator (grandchild of the two elderly people, female, age 30)
controlled the robot from another apartment upstairs in the same house
2 May 2012 Milan, Italy 12 private caregivers: 6 males, 6 females; mean age: 54.1; maximum age: 66; minimum age: 31; mean education: 3.5
Private caregivers monitoring the situation and remotely assisting: The private caregiver calls the assisted elderly person to check the situation at home and realizes that they forgot to take their medicine. The elderly person accepts the caregiver's help. Using UI_PRI, the caregiver sends the robot to bring a medicine box located on a shelf in the corridor; meanwhile the elderly person and the caregiver keep talking, and the caregiver also gets visual feedback of what the robot is doing and where it is going.
Based on the user test results and suggestions, UI_PRI was updated and validated by integration tests and user validations, performed twice in Stuttgart, Germany, in November 2012 and February 2013. The updates and evaluation results are shown in the table below:
TABLE 5 SRS UI_PRI UPDATE AND EVALUATION
SRS UI_PRI Update and evaluation
PROBLEM SUGGESTIONS FROM USER TEST
IMPLEMENTED (YES/NO)
TESTING RESULT
Not all items were translated into Italian; it is necessary to integrate the possibility to change the language
YES UI_PRI can read the suitable language directly from the knowledge base
Manual navigation is intuitive to use, but the precision is not sufficient (e.g. moving the robot through a door). Remote control is difficult with only a 2D map displaying an icon of the robot.
NO The function is implemented in UI_PRO using mixed reality and VR. This is not yet in the scope of UI_PRI
Skype not well integrated into the UI_PRI app ( It is difficult to switch between the skype and srs application in UI_PRI)
NO Limited by the Apple IOS
The UI_PRI video stream from the robot stopped working on several occasions; a restart was needed.
YES Network problem; not occurring in our tests
UI_PRI: information about objects on the tray is not provided
YES New on tray service is tested and working
The status messages displayed on UI_PRI are confusing and not informative for the targeted user
YES New state information based on the json communication
Too little information is provided on the apartment map
Partially Improved map on UI_PRO
A user could generate illogical tasks or set inconsequential action sequences using the sliding windows, which may cause the robot to stop responding.
The wrong input needs to be detected on the client side. After catching the error, the interface needs to prompt relevant error messages to the user.
YES The logic on user input has been improved. Impossible commands are filtered by the user interface.
The left-hand list shows task names; however, these names cannot explicitly indicate their corresponding actions. If a task "Get + Milkbox" is named "task_1" during the action-adding process, the task name "task_1" will appear on the list. There is then no way for the user to check the contained actions of "task_1" after the action-adding process. In practice, the user can generate a task with a mismatched name, e.g. the task is "Get + Medicine A" but its name is "Get Medicine B". Using the current interface, other users are not able to check the task details, which may cause accidents during task execution.
The interface needs to show both task names and their contained actions on the left hand list, instead of showing task names only.
YES The redesign of the interface has been tested
The function regarding task editing is limited. It only allows the user to delete existing tasks.
The usability of this function needs to be improved. The interface needs to provide fully functional task editing, including task add/delete, task rename, action add/delete, etc.
YES New task can be added in the UI_PRI
After the user clicks a task in the left-hand list, no clear message appears on the interface, so the user cannot tell whether the robot has received the task command. To address this, a response from the robot is needed after the user sends commands.
YES Additional feedback has been tested and is working.
During the task execution process, no real-time feedback is shown on the interface. There is no way for the user to know the task execution progress, or even whether the task has failed.
In reality, the user needs to make decisions and monitor the robot based on feedback. It is necessary to display feedback on the interface during task execution. The feedback can be obtained from DM, and it should include information regarding the current action, last action, action feedback, action name, task name, etc. For the interface developer, the source code of "srs_high_level_statemachines.py", lines 139 – 160, can be a useful reference. In addition, a fixed window on top of the interface for displaying feedback is suggested (a layout similar to UI_PRO's can be used as a reference).
YES Additional feedback has been tested and is working.
During the manual control process, the video from cameras 1 and 2 is upside down.
The interface developer needs to add a button that enables the user to adjust the video orientation, e.g. "change direction: left to right" or "change direction: up to down".
YES A new topic has been added for flipping the display.
The function of the on-tray button is unclear. It is probably supposed to display the information of an object on the tray. However, it failed during tests: e.g. a milk box was on the tray, and after the button had been clicked, nothing happened. Whether or not there is an object on the tray, clicking the button has no effect. The interface should detect whether there is an object on the tray after a user clicks the button, and then pop up a relevant message to the user, e.g. "There is no object on the tray" or "A milkbox is on the tray".
YES New service developed for on-tray has been tested and working
Once the user uses manual control, it is impossible to switch from manual control back to autonomous control. For example, a command is sent by the user, and then a message appears on the screen, i.e. "There is an obstacle on the robot's path. Do you want to try to move the robot manually?". If the user clicks "Yes", the control mode switches to manual control, and the user is able to help the robot get unstuck. However, the control mode cannot afterwards be switched back to autonomous control. In other words, if the user then sends a task from the left-hand list, the robot will not respond to or execute it.
YES Tested and working
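One of the fixes in the table above replaced the confusing status messages with state information based on JSON communication. The sketch below illustrates the general idea of rendering such a JSON state message for the user; the field names `task`, `action` and `state` are illustrative assumptions, not the actual SRS message schema:

```python
import json

def render_status(message):
    """Turn a JSON status message from the robot into a short,
    user-readable line (field names here are illustrative)."""
    data = json.loads(message)
    return "%s: %s (%s)" % (data["task"], data["action"], data["state"])

# Example status message as the robot side might publish it
msg = json.dumps({"task": "get milk",
                  "action": "move to kitchen",
                  "state": "running"})
print(render_status(msg))  # get milk: move to kitchen (running)
```

A UI can display the rendered line in a fixed status area instead of exposing raw state-machine identifiers to the user.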
5.4 PROOF OF USABILITY FOR PROFESSIONAL USERS
The usability test for professional users was carried out three times, as summarized below:
Time Location Participants Procedure
1 Jan 2012 San Sebastian, Spain
13 professional operators: 7 women and 6 men. Profession: 7 worked for a tele-assistance company and 6 for the Matia-Ingema hospital. Type: middle-aged adults, mean age = 31.08
The participants use the system in an emulated natural scenario of use. Task: grasping; Leaving; Moving scenario
2 May 2012 Milan, Italy 5 professional operators: 1 female and 4 males mean age: 33.2
Professional operators managing an emergency: The elderly person, feeling unwell, presses the emergency button. The professional operator is immediately contacted. The elderly person states the immediate need for the medicine. The professional operator, using UI_PRO, sends the robot to bring a medicine box located on a shelf in the corridor; in the meantime the elderly person and the professional remote operator keep talking, and the professional operator also gets feedback, through the robot simulator, of the robot's movements and robotic arm.
Based on the user test results and suggestions, UI_PRO and UI_BUT have been updated and validated by integration tests and user validations, carried out twice in Stuttgart, Germany, in Nov 2012 and Feb 2013, respectively. The updates and evaluation results are shown in the table below:
TABLE 6 SRS UI_PRO/UI_BUT UPDATE AND EVALUATION
PROBLEM SUGGESTIONS FROM USER TEST
IMPLEMENTED (YES/NO)
TESTING RESULT
Data are not shown in real time (or at least in useful time)
YES Reduced Kinect frame (1-3 fps) and differential transfer of the voxel map to the UI_PRO PC provides satisfactory refresh rate of the display.
The interface is not user friendly, even for engineers; it is difficult to understand without long training.
YES The simplified UI performed exceptionally well. Every user was able to complete the tasks, and the overall questionnaire ratings are very positive.
Bandwidth problems do not allow the synchronous use of video and virtual reality, which is considered mandatory.
YES Video and simultaneous 3D user interface fully implemented.
It is not possible to see details of the robotic arm and to properly control it (avoiding harm, for example).
YES Average user rating for correct display of arm position (in questionnaire after evaluation): score 6.1 out of 7.
It is not possible to manually control the robot; only autonomous mode is working at the moment.
YES Manual navigation (using the SpaceNavigator or in-scene control) and manual (semi-autonomous) manipulation are fully implemented.
The map visualization is too poor; more details of the house are needed.
YES Two 3D mapping pipelines are available now: BUT’s voxel map and IPA’s geometric map. Both the maps were used and tested during the user tests.
Virtual reality must be improved; 3D vision is not complete (on the map you see the robot in 3D, while the map is 2D, so there are no references to the height of furniture).
YES Full 3D user interface is implemented.
There is no way to teach the robot new objects
Workaround implemented
With COB, it needs to be done on site manually as the object needs to rotate. However, there is a way to grasp an unknown object – the assisted grasping plug-in (predefined grasp types - rounded, etc. and feedback from tactile sensors).
Two different screens split attention while performing actions.
One screen to avoid divided attention / simplify multiple-screen solution.
NO There will be two screens: ROB's UI_PRO and BUT's RViz, but in most cases using the BUT RViz screen will be sufficient.
The users focus their attention on environment-simulation screen and not on the movement pad screen, producing inaccurate movements.
YES Although the problem description is obscure, no such problem occurred.
The robot-position map does not provide useful information to the user.
The environment simulation screen should give more detailed spatial information.
YES The full robot model, the 2D as well as the 3D map, data coming from the Kinect, the robot FOV, etc. are visualized in RViz. These visualizations give a good overview of the situation.
There is no screen-feedback about the actions performed in the robot-movement plan
Feedback is needed across the different steps
YES In the case of BUT's assisted arm navigation, the current state of the planning is shown as text inside the 3D view and on the RViz plug-in panel.
The interface is not intuitive.
The buttons should be placed in relevant places on the screen: all actions of the same level in the same place, icons instead of buttons, text boxes or windows with explanations, etc.
YES Usability and overall ratings were very high.
It is difficult to understand where the robot is located when starting a movement sequence.
The robot direction should be shown on the screen whenever the user is going to start a movement.
YES No such navigation problems occurred in the evaluation.
It is difficult to know the distance between the robot and the objects.
Apart from collision-avoidance systems, the distances should be constantly shown to the user to prevent collisions.
YES There is an option for enabling a distance indicator.
It is more difficult to move the robot directly than to point to the direction the robot should move.
The robot could be moved just by pointing to the end position on the map.
YES This feature is present (a standard RViz feature).
The environment simulation's overhead point of view seems more difficult to understand than the subjective point of view.
The system should provide subjective vision.
YES According to many previous research works, the exocentric 3D view of the environment and the robot is the best option to quickly assess the situation. Nevertheless, all kinds of views (egocentric, exocentric) can be adopted in BUT's UI; depending on the situation, the user chooses the appropriate view.
The users prefer to minimize the action options.
It would be preferable if the system detected the object automatically rather than generating a selection list.
YES In BUT’s UI, the main window pane containing the 3D scene and RGB video of the robot’s camera is usually used in maximized mode so no other user interface panels are shown on the screen to improve the user experience.
It is difficult to know how to speed up or slow down the robot's movement.
There should be a speedometer or acoustic signals (e.g. a higher-frequency sound for higher speeds) to inform the user about the speed of the robot.
YES No such issues occurred during remote navigation with SpaceNavigator.
The joystick directions do not always coincide with the robot orientation on the environment-simulation screen.
The environment-simulation screen should always rotate with the robot orientation.
YES No joystick / robot orientation issues occurred.
Moving the robot could be done with cursor keys instead of the joystick-simulation interface.
YES We can use the in-scene teleop, which works really well and is quite intuitive.
The user does not know when to press “grab”.
There should be feedback about the different grabbing steps. The user should be aware of the precise moment to press the grab option; the system should not allow the user to start a grabbing action whose outcome is clearly a failure.
YES In the case of BUT's assisted grasping, buttons are enabled only when they can be used.
It is difficult to know which arm-movement step the robot is in.
There should be some visual feedback about the movements.
YES Arm movement is shown precisely in 3D and in real time in BUT's UI.
The users press the grab button too early, knocking over the object.
When the hand is closed too early by error, the robot movement should be stopped automatically.
YES In BUT's UI, the user can only press the "grasp" button after the trajectory has finished executing.
The hand should be able to be opened again.
YES Can be done in BUT’s UI.
The robot takes too much time to start performing the action.
The system should give the user feedback when the robot starts to perform actions, e.g. a "Please wait" message.
YES No such issue occurred in the evaluation of BUT's UI. After the user has simulated a trajectory, he presses "Execute". The wait time there is very low (<1 sec) and no user complained about it.
The movement performance of the hand does not take the object into account (an opened tin's content could be spilled).
The movement should take the object into account automatically, or ask the user for movement safety degrees.
NO The assisted arm navigation takes into account collisions with the environment and the object when planning the trajectory. However, the robot does not adjust its movement speed based on the delicacy of the object. This is not a UI issue, however, but an AI, knowledge, or control issue.
Feedback is needed in the arm-movement sequence.
YES As noted above, real-time feedback and 3D visualization of the arm are implemented.
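One fix in the table above achieved a satisfactory display refresh rate by transferring the voxel map differentially. The idea can be sketched as sending only the voxels that changed between two frames; the snippet below is a simplified illustration of that concept, not the actual BUT implementation:

```python
def voxel_diff(previous, current):
    """Compute the update that turns `previous` into `current`.
    Each map is a dict mapping a voxel coordinate to an occupancy value."""
    added = {v: val for v, val in current.items() if previous.get(v) != val}
    removed = [v for v in previous if v not in current]
    return added, removed

prev = {(0, 0, 0): 1, (0, 0, 1): 1}
curr = {(0, 0, 0): 1, (0, 1, 0): 1}
added, removed = voxel_diff(prev, curr)
# Only the changed voxels are transferred instead of the full map:
# added == {(0, 1, 0): 1}, removed == [(0, 0, 1)]
```

With a mostly static environment the diff is small, which is why a low Kinect frame rate (1-3 fps) combined with differential updates is enough for a responsive display.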
6 CONCLUSIONS
The system evaluation and tests are described in this report. To conclude, the system works well functionally for the purpose of basic use by elderly people. The system is highly accepted and perceived as interesting and potentially useful. The user interface is friendly and has high learnability. However, due to the limitations of the robot's task performance and the occurrence of unexpected situations, the robot was occasionally unable to complete its task successfully, which shows the importance of the semi-autonomous robotic solution. Human intervention and guidance can ensure that the robot completes the task in case the autonomous robotic behaviour fails.
7 REFERENCES
Wessa, P. (2012), Free Statistics Software, Office for Research Development and Education, version 1.1.23-r7, URL http://www.wessa.net/
8 APPENDIX 1 - SOFTWARE COMPONENTS USAGE/EXAMPLES
In this section, usage examples are given for the software components that were proved in Section 3: Proof of Technology.
8.1 DECISION MAKING (SRS_DECISION_MAKING)
Writing a simple action client:
import roslib; roslib.load_manifest('srs_decision_making')
import rospy
# Brings in the SimpleActionClient
import actionlib
# Brings in the messages used by the SRS DM action, including the
# goal message and the result message.
import srs_decision_making.msg as xmsg
def DM_client():
    # Creates the SimpleActionClient, passing the type of the action
    # to the constructor.
    client = actionlib.SimpleActionClient('srs_decision_making_actions', xmsg.ExecutionAction)
    # Waits until the action server has started up and started
    # listening for goals.
    client.wait_for_server()
    # Creates a goal to send to the action server.
    _goal = xmsg.ExecutionGoal()
    _goal.action = "get"
    _goal.parameter = "milk"
    _goal.priority = 0
    # Sends the goal to the action server.
    client.send_goal(_goal)
    # Waits for the server to finish performing the action.
    client.wait_for_result()
    # Returns the result of executing the action
    return client.get_result()

if __name__ == '__main__':
    try:
        # Initializes a rospy node so that the SimpleActionClient can
        # publish and subscribe over ROS.
        rospy.init_node('dm_client')
        result = DM_client()
        rospy.loginfo('result %s', result)
    except rospy.ROSInterruptException:
        print "program interrupted before completion"
Writing an action client for sending an action sequence (the snippet below is assumed to run inside a function such as DM_client above):
client = actionlib.SimpleActionClient('srs_decision_making_actions', xmsg.ExecutionAction)
# Waits until the action server has started up and started
# listening for goals.
client.wait_for_server()
# Creates the action sequence for the action server
sequence = []
# Creates the first goal to send to the action server.
_goal = xmsg.ExecutionGoal()
_goal.action = "move"
_goal.parameter = "kitchen"
_goal.priority = 0
sequence.append(_goal)
# A new goal object must be created for each action; reusing the
# same object would overwrite the entry already in the sequence.
_goal = xmsg.ExecutionGoal()
_goal.action = "search"
_goal.parameter = "milk"
_goal.priority = 0
sequence.append(_goal)
# Sends the goals to the action server one by one.
for item in sequence:
    client.send_goal(item)
    # Waits for the server to finish performing the action.
    client.wait_for_result()
    # Stops the sequence if an action did not succeed
    # (3 is the actionlib SUCCEEDED status code).
    if client.get_result() != 3:
        return client.get_result()
# Returns the result of executing the last action
return client.get_result()
8.2 KNOWLEDGE BASE (SRS_KNOWLEDGE)
Retrieving information from the knowledge base
Here are some examples of calling the above /get_things_info services. The knowledge server reads environment information from a parameter server at /srs/language_short. The purpose is to let UIs receive readable information in different languages, rather than the object names in the RDF files. Details can be found in the service definitions, where readableNames[] contains the corresponding readable information.
Example of service /get_objects_on_map
Set the language to English:
$ roscd srs_environments/param
$ rosparam load dm_simple_english.yaml
To call the service:
$ rosservice call /get_objects_on_map ipa-kitchen false
## false indicates that the geometry information is not returned
objects: ['MilkBox0', 'Salt0', 'Pringle0', 'Bottle0']
classesOfObjects: ['Milkbox', 'Salt', 'Pringles', 'Bottle']
spatialRelation: ['spatiallyRelated', 'NA', 'spatiallyRelated', 'NA']
spatialRelatedObject: ['Dishwasher0', 'NA', 'Table0', 'NA']
houseHoldId: ['9', '11', '10', '12']
objectsInfo: []
readableNames: ['Milkbox', 'Salt', 'Crisps (Brand: Pringles)', 'Bottle']
When the language is Italian (the translation was made with Google Translate, hence accuracy is not guaranteed):
$ rosparam load dm_simple_italian.yaml
$ rosservice call /get_objects_on_map ipa-kitchen false
objects: ['MilkBox0', 'Salt0', 'Pringle0', 'Bottle0']
classesOfObjects: ['Milkbox', 'Salt', 'Pringles', 'Bottle']
spatialRelation: ['spatiallyRelated', 'NA', 'spatiallyRelated', 'NA']
spatialRelatedObject: ['Dishwasher0', 'NA', 'Table0', 'NA']
houseHoldId: ['9', '11', '10', '12']
objectsInfo: []
readableNames: ['Latte', 'Sale', 'patatine (Pringles)', 'bottiglia']
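The service returns parallel arrays, so a client can zip them into a lookup table from internal object identifiers to the language-dependent labels. A minimal sketch using the English response above:

```python
# Parallel arrays as returned by /get_objects_on_map (English example above)
objects = ['MilkBox0', 'Salt0', 'Pringle0', 'Bottle0']
readable_names = ['Milkbox', 'Salt', 'Crisps (Brand: Pringles)', 'Bottle']

# Map internal identifiers to the labels a UI should display
labels = dict(zip(objects, readable_names))
print(labels['Pringle0'])  # Crisps (Brand: Pringles)
```

Loading a different language parameter file changes only readableNames, so the UI code stays unchanged.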
Example of /get_workspace_on_map
$ rosservice call /get_workspace_on_map ipa-kitchen false
objects: ['Table0', 'Oven0', 'Stove0', 'Fridge0', 'Sofa0', 'Sink0', 'Dishwasher0']
classesOfObjects: ['Table-PieceOfFurniture', 'Oven', 'StoveTop', 'Refrigerator-Freezer', 'Sofa-PieceOfFurniture', 'Sink', 'Dishwasher']
objectsInfo: []
houseHoldId: ['7', '1', '3', '4', '5', '2', '6']
# the related id in the HHDB (-1 is the default value, showing there is no associated model in the HHDB).
readableNames: ['Kitchen Table', 'Oven', 'Stove', 'Fridge', 'Sofa in Living room', 'Sink', 'Dishwasher']
json_properties: ['{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}']
### {"insideOf":"ipa-kitchen"} is in JSON format, indicating that the corresponding workspace is inside the room ipa-kitchen.
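The json_properties strings are plain JSON, so a client can decode them with a standard parser to recover, for example, which room each workspace is inside:

```python
import json

# json_properties entries as returned by /get_workspace_on_map above
json_properties = ['{"insideOf":"ipa-kitchen"}'] * 7

rooms = [json.loads(p).get("insideOf") for p in json_properties]
print(rooms[0])  # ipa-kitchen
```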
Coordinates for the furniture pieces of the IPA kitchen can also be retrieved by setting ifGeometryInfo to true in the service call:
$ rosservice call /get_workspace_on_map ipa-kitchen true
Additional information, such as 2D projections of the furniture, should be retrieved from the HHDB.
Example of /get_rooms_on_map
$ rosservice call /get_rooms_on_map ipa-kitchen false
rooms: ['LivingRoom0', 'ipa-kitchen']
roomsInfo: []
Example of /get_predefined_poses
To get all predefined positions and their readable/meaningful names:
$ rosservice call /get_predefined_poses ipa-kitchen
locations: ['kitchen_backwards', 'ChargingStation0', 'order', 'new_kitchen', 'home', 'kitchen']
poses:
-
x: -2.03999996185
y: -0.300000011921
theta: 3.1400001049
-
x: 1.0
y: -1.60000002384
theta: 1.57000005245
-
x: 1.47000002861
y: -0.699999988079
theta: 0.75
-
x: -2.1400001049
y: 0.0
theta: 0.0
-
x: 0.0
y: 0.0
theta: 0.0
-
x: -2.03999996185
y: 0.300000011921
theta: 0.0
readableNames: ['Kitchen (Backward)', 'Charging Station', 'User (Order Position)', 'Kitchen ', 'Robot Home Position', 'IPA Kitchen']
json_properties: ['{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}', '{"insideOf":"ipa-kitchen"}']
Example of /get_workspace_for_object
Get the types of possible workspaces for a particular object, such as where a Milkbox could be located:
$ rosservice call /get_workspace_for_object Milkbox 0
workspaces: ['Refrigerator-Freezer', 'Cupboard', 'Dishwasher', 'IkeaShelf', 'Table-PieceOfFurniture']
Or: get the instances of all possible workspaces for a particular object, such as where a Milkbox could be located:
$ rosservice call /get_workspace_for_object Milkbox 1
workspaces: ['Dishwasher0', 'Fridge0', 'Table0']
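The two calls above are related: the instance-level answer can be understood as the class-level answer intersected with the workspace instances actually present on the map (as returned by /get_workspace_on_map). A small client-side sketch using the example data above:

```python
# Workspace classes where a Milkbox could be located (mode 0 above)
candidate_classes = ['Refrigerator-Freezer', 'Cupboard', 'Dishwasher',
                     'IkeaShelf', 'Table-PieceOfFurniture']

# Workspace instances on the map and their classes (/get_workspace_on_map)
workspaces_on_map = {'Dishwasher0': 'Dishwasher',
                     'Fridge0': 'Refrigerator-Freezer',
                     'Table0': 'Table-PieceOfFurniture',
                     'Sofa0': 'Sofa-PieceOfFurniture'}

# Keep only the instances whose class is a plausible location for the object
search_targets = sorted(name for name, cls in workspaces_on_map.items()
                        if cls in candidate_classes)
print(search_targets)  # ['Dishwasher0', 'Fridge0', 'Table0']
```

This matches the instance-level service response above and is the kind of list DM can iterate over when searching for an object.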
Some missing examples/usages
Usually, this package can be run simply by launching the launch file in the package srs_scenario. To start the service individually, run:
rosrun srs_knowledge knowledgeEngine
and an example script showing how to test it:
rosrun srs_knowledge demoplanaction.py
or rosrun srs_knowledge testRosJavaService.py
Mainly, this package is used together with srs_decision_making:
rosrun srs_decision_making srs_actions_server.py
8.3 GRASPING (SRS_GRASPING)
The grasping tools call the generator when the goal has no grasping information in the database. Internally, these tools need to call some srs_object_database services. For that reason, before using these tools you must launch the srs_db.launch file. More details about the configuration and content of this file can be found in the srs_object_database wiki.
roslaunch srs_object_database srs_db.launch
After launching this file, you must launch:
roslaunch srs_grasping grasping_services.launch
At this point, you can use the different grasping tools, such as the generator, the simulator or the grasp_machine:
rosrun srs_grasping test_generator.py
rosrun srs_grasping test_simulation.py
8.4 SIMPLE SCENARIO (SRS_SCENARIOS)
This section shows how to start the simulation with all related services.
a. To simplify the debugging process, the SRS scenario in the simulation can be started with one of the following options:
roslaunch srs_scenarios srs_scenario_sim.launch
This scenario uses slightly modified generic states from cob_scenarios. Note: the robot will not grasp the object immediately after detection. It is a simple scenario suitable for most users.
roslaunch srs_scenarios srs_scenario_planned_grasp.launch
This scenario uses planned arm movement (IPA) and srs_grasp (ROB). It has been used in the Milan test.
roslaunch srs_scenarios srs_scenario_assisted_grasp.launch
This scenario uses srs_assisted_arm_navigation, developed by BUT, for assisted arm movement.
roslaunch srs_scenarios srs_scenario_assisted_detection.launch
This scenario enables an interactive process of srs_assisted_detection for object detection.
b. Add a milk bottle to the simulation, in either the kitchen or on the table:
roslaunch srs_scenarios milk_kitchen.launch
or
roslaunch srs_scenarios milk_table.launch
e. Start a testing client (the testing client simulates the input from an interface device):
rosrun srs_decision_making test_client_json.py
Start the get-milk process
rosrun srs_decision_making test_client_pause.py
Pause the operation
rosrun srs_decision_making test_client_resume.py
Resume the operation
rosrun srs_decision_making test_client_stop.py
Stop the operation
f. Additional clients for assisted detection. These commands are only for testing purposes; they should be replaced by commands sent from the UIs.
rosrun srs_assisted_detection fake_detection_user.py
Trigger an assisted detection
rosrun srs_assisted_detection fake_action_user.py
Confirm a selected object
rosrun srs_assisted_detection fake_bb_user.py
Specify a new bounding box. The service below is required for the bounding-box-related moves:
rosrun srs_symbolic_grounding scan_base_pose_server.py
8.5 HUMAN SENSING (SRS_LEG_DETECTOR)
To start the leg detector, first make sure you have a source of laser scans, e.g. a laser range finder or a bag file with suitable data, and then issue:
roslaunch srs_leg_detector srs_leg_detector_front.launch
8.6 PRIVATE USER INTERFACE (SRS_UI_PRI)
1. Start srs_scenarios
2. Start the topics for UI_PRI error handling in srs_scenarios (mandatory)
3. Start srs_mixed_reality_server
4. Start rosbridge
rosrun rosbridge rosbridge.py
5. Start the app on your iPad
8.7 LOCAL USER INTERFACE (SRS_UI_LOC)
1. Start srs_scenarios on the computer with the hostname you have configured in Settings
2. On the same computer, start srs_mixed_reality_server
3. Start the SRS app on the device
4. Change settings if needed, as described above
5. Touch the "SRS" button to start UI_LOC
6. Use the buttons provided by UI_LOC to navigate through UI_LOC (soft phone buttons are deactivated)
Note 1: The first time you start UI_LOC, there won't be any contacts to call. To change that, you need to restart UI_LOC: in the main menu (two grey buttons, one red button), open the Android menu, choose "Exit" and then touch the "SRS" button again.
Note 2: A contact photo is transmitted during the first call to that contact. However, it won't be displayed before restarting UI_LOC. Therefore, in order to have all contact photos, one needs to call all contacts (only ringing them does not work) and then restart UI_LOC. The contact photos will be saved permanently.
Other: To read the phone log or to debug the application (with adb or Eclipse), one needs to activate USB debugging: Settings -> Applications -> Development -> USB debugging -> check.
8.8 ASSISTED ARM NAVIGATION (SRS_ASSISTED_ARM_NAVIGATION)
To start the COB simulation and the actionlib server, use:
roslaunch srs_assisted_arm_navigation but_arm_nav_sim.launch
and then the next command to start the user interface (RViz):
roslaunch srs_assisted_arm_navigation_ui rviz.launch
Below is an example of a rospy script that can be used to "trigger" a manipulation task. Run it, and the operator should be asked for an action by a message box from the RViz plugin.
#!/usr/bin/env python
import roslib; roslib.load_manifest('your_package')
import rospy
import actionlib
from srs_assisted_arm_navigation_msgs.msg import *
def main():
    rospy.init_node('arm_manip_action_test')
    rospy.loginfo("Node for testing actionlib server")
    client = actionlib.SimpleActionClient('/but_arm_manip/manual_arm_manip_action', ManualArmManipAction)
    rospy.loginfo("Waiting for server...")
    client.wait_for_server()
    goal = ManualArmManipGoal()
    goal.allow_repeat = False
    goal.action = "Move arm to arbitrary position"
    goal.object_name = ""
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("I have result!! :-)")
    result = client.get_result()
    if result.success:
        rospy.loginfo("Success!")
    if result.failed:
        rospy.loginfo("Failed :-(")
    rospy.loginfo("Time elapsed: %ss", result.time_elapsed.to_sec())

if __name__ == '__main__':
    main()
8.9 INTERACTION PRIMITIVES (SRS_INTERACTION_PRIMITIVES)
Adding primitives from C++:
// Create the Interactive Marker Server
InteractiveMarkerServerPtr server;
server.reset(new InteractiveMarkerServer("test_primitives", "", false));
// Create a billboard - /world is the frame and my_billboard is the primitive's unique name
Billboard *billboard = new Billboard(server, "/world", "my_billboard");
billboard->setType(srs_interaction_primitives::BillboardType::PERSON); // type PERSON
billboard->setPose(pose); // actual position of the billboard
billboard->setPoseType(srs_interaction_primitives::PoseType::POSE_CENTER); // coordinates are in the center of the primitive
billboard->setScale(scale); // scale of the billboard
billboard->setDirection(direction); // actual movement direction
billboard->setVelocity(velocity); // actual movement velocity
billboard->setDescription("This is me!"); // description
billboard->insert(); // creates the interaction primitive and inserts it into the IMS
Using predefined services
Run service server
rosrun srs_interaction_primitives interaction_primitives_service_server
Run RViz
rosrun rviz rviz
Add Interactive Marker Display and set Update Topic to /interaction_primitives/update.
Now you can call services and add or update Primitives.
Calling services from bash
Add Bounding Box:
rosservice call /interaction_primitives/add_bounding_box '{frame_id: /world, name: bbox, object_name: obj, description: "", 0, pose: { position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }, scale: { x: 1, y: 1, z: 1 }, color: { r: 1, g: 0, b: 0 }}'
Add Billboard:
rosservice call /interaction_primitives/add_billboard '{frame_id: /world, name: billboard, type: 2, description: "this is billboard", velocity: 5.6, direction: {x: 1, y: 1, z: 0, w: 1}, 0, pose: {position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }, scale: { x: 1, y: 1, z: 1 }}'
Add Plane:
rosservice call /interaction_primitives/add_plane '{frame_id: /world, name: plane, description: "", 0, pose: { position: { x: -1, y: 0, z: 5 }, orientation: { x: 0, y: 1, z: 0, w: 1 } }, scale: { x: 1, y: 1, z: 1 }, color: { r: 1, g: 0.3, b: 0, a: 1.0 }}'
Add Object:
rosservice call /interaction_primitives/add_object '{frame_id: /world, name: table_obj, description: "My table", 0, pose: { position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }, scale: { x: 1, y: 1, z: 1 }, resource: "package://gazebo_worlds/Media/models/table.dae", use_material: false, color: {r: 1, g: 0, b: 1, a: 1 }}'
Add Unknown Object:
rosservice call /interaction_primitives/add_unknown_object '{frame_id: /world, name: uobj, description: "", 0, pose: { position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }, scale: { x: 1, y: 1, z: 1 } }'
Add Object (with bounding box):
rosservice call /interaction_primitives/add_object '{frame_id: /world, name: milk, description: "Detected milk", 0, pose: { position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }, bounding_box_lwh: {x: 1.0, y: 0.2, z: 0.1}, color: {r: 1, g: 1, b: 0}, resource: package://cob_gazebo_worlds/Media/models/milk_box.dae, use_material: True}'
Change pose:
rosservice call /interaction_primitives/change_pose '{name: plane, pose: { position: { x: -1, y: 0, z: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } }}'
Change scale:
rosservice call /interaction_primitives/change_scale '{name: plane, scale: { x: 1.2, y: 2, z: 0.5 }}'
Change color:
rosservice call /interaction_primitives/change_color '{name: plane, color: { r: 1, g: 0, b: 1, a: 0.5}}'
Change description:
rosservice call /interaction_primitives/change_description '{name: plane, description: "something"}'
Set pre-grasp position:
rosservice call /interaction_primitives/set_pregrasp_position '{name: object_name, pos_id: 1, position: {x: 0.2, y: 0.1, z: -0.5}}'
Remove pre-grasp position:
rosservice call /interaction_primitives/remove_pregrasp_position '{name: object_name, pos_id: 1}'
Remove object:
rosservice call /interaction_primitives/remove_object '{name: object_name}'
Calling services from C++:

#include <ros/ros.h>
#include <srs_interaction_primitives/AddBoundingBox.h>

int main(int argc, char **argv)
{
  // ROS initialization (the last argument is the name of the node)
  ros::init(argc, argv, "interaction_primitives_client");

  // NodeHandle is the main access point to communications with the ROS system
  ros::NodeHandle n;

  // Create a client for the add_bounding_box service
  ros::ServiceClient bboxClient =
      n.serviceClient<srs_interaction_primitives::AddBoundingBox>("interaction_primitives/add_bounding_box");

  // Set the parameters of the new bounding box
  srs_interaction_primitives::AddBoundingBox bboxSrv;
  bboxSrv.request.name = "Bounding box";
  bboxSrv.request.object_name = "attached_object_name";
  bboxSrv.request.frame_id = "/world";
  bboxSrv.request.description = "This is Bounding Box";
  bboxSrv.request.pose_type = srs_interaction_primitives::PoseType::POSE_BASE;
  bboxSrv.request.pose.position.x = 1.0;
  bboxSrv.request.pose.position.y = 1.0;
  bboxSrv.request.pose.position.z = 1.0;
  bboxSrv.request.pose.orientation.x = 0.0;
  bboxSrv.request.pose.orientation.y = 0.0;
  bboxSrv.request.pose.orientation.z = 0.0;
  bboxSrv.request.pose.orientation.w = 1.0;
  bboxSrv.request.scale.x = 2.0;
  bboxSrv.request.scale.y = 3.0;
  bboxSrv.request.scale.z = 4.0;
  bboxSrv.request.color.r = 1.0;
  bboxSrv.request.color.g = 0.0;
  bboxSrv.request.color.b = 1.0;
  bboxSrv.request.color.a = 1.0;

  // Call the service with the specified parameters
  bboxClient.call(bboxSrv);

  ros::spinOnce(); // Call all the callbacks waiting to be called
  return 0;
}
8.10 ENVIRONMENT PERCEPTION (SRS_ENV_MODEL_PERCP)
A few examples of starting the described nodes:
normal start of segmenter node
rosrun srs_env_model_percp but_segmenter
start of segmenter node with method specified (depth only segmentation)
rosrun srs_env_model_percp but_segmenter -type depth
start of segmenter node with maximum depth specified (in millimetres)
rosrun srs_env_model_percp but_segmenter -maxdepth 4500
normal start of plane detector node (you must specify the input)
rosrun srs_env_model_percp but_plane_detector -input pcl
start of plane detector node with target frame id specified
rosrun srs_env_model_percp but_plane_detector -input pcl -target /world
8.11 SPACENAV TELEOP (COB_SPACENAV_TELEOP)
To test with the simulated Care-O-bot, run the following command:
roslaunch cob_spacenav_teleop spacenav_teleop_test_sim.launch
To test with a real robot, connect the SpaceNav device and launch its driver:
rosrun spacenav_node spacenav_node
Launch teleop:
roslaunch cob_spacenav_teleop spacenav_teleop.launch
8.12 ARM NAVIGATION TESTS (SRS_ARM_NAVIGATION_TESTS)
File arm_manip_generic_states.py implements two smach generic states which can be used as part of a state machine:
move_arm_to_given_positions_assisted - for user-assisted trajectory planning, i.e. for moving the arm to the pre-grasp position
move_arm_from_a_given_position_assisted - for user-assisted navigation to a safe position
Launch the needed components:
roslaunch srs_arm_navigation_tests fake_dm_test.launch
spawn milk:
roslaunch srs_arm_navigation_tests milk_box.launch
Run the fake decision making:
roslaunch srs_arm_navigation_tests fake_dm_only.launch
It will test both generic states in this order:
move_arm_to_given_positions_assisted
move_arm_from_a_given_position_assisted
Fake_dm will try to use the srs_grasping services to obtain the pre-grasp position. If that is not possible, fake positions will be used (just to test the visualization).
8.13 ASSISTED ARM NAVIGATION (SRS_ASSISTED_ARM_NAVIGATION)
First, please take a look at how to install and use the user interface for assisted arm manipulation.
To start the Care-O-bot simulation and the actionlib server, use:
roslaunch srs_assisted_arm_navigation but_arm_nav_sim.launch
and then the next command to start the user interface (RViz):
roslaunch srs_assisted_arm_navigation_ui rviz.launch
Below is an example of a rospy script which can be used to "trigger" a manipulation task. Run it, and the operator should be asked for an action by a message box from the RViz plugin.
#!/usr/bin/env python
import roslib; roslib.load_manifest('your_package')
import rospy
import actionlib

from srs_assisted_arm_navigation_msgs.msg import *

def main():
    rospy.init_node('arm_manip_action_test')
    rospy.loginfo("Node for testing actionlib server")
    client = actionlib.SimpleActionClient('/but_arm_manip/manual_arm_manip_action', ManualArmManipAction)
    rospy.loginfo("Waiting for server...")
    client.wait_for_server()

    goal = ManualArmManipGoal()
    goal.allow_repeat = False
    goal.action = "Move arm to arbitrary position"
    goal.object_name = ""

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("I have result!! :-)")

    result = client.get_result()
    if result.success:
        rospy.loginfo("Success!")
    if result.failed:
        rospy.loginfo("Failed :-(")
    rospy.loginfo("Time elapsed: %ss", result.time_elapsed.to_sec())

if __name__ == '__main__':
    main()
8.14 UI BUT (SRS_UI_BUT)
Distance Linear Visualizer
In display properties you can set these options:
Topic - point cloud topic from which the closest point should be calculated (if you have more Distance Linear Visualizer displays, the change of the topic will affect all of them)
Link - link from which you want to show the closest point
Color - color of the line and text
Alpha - transparency of the line and the text
Line Thickness - thickness of the line
Draw distance - draws a text with the distance into the scene if checked
Distance Circular Indicator
In display properties you can set these options:
Link - link around which you want to show distance circles
Orientation - orientation of the circles
Color - color of the circles and text
Alpha - transparency of the circles and text
Levels - number of levels (circles)
Radius - distance between circles
Thickness - thickness of the circles
Show distance - draws a text with the distances between circles
COB Stretch Indicator
In display properties you can set these options:
Rendering options
  Style - switch between Square and Circle
  Thickness - thickness of the rendered lines or circles
  Correction - radius correction due to the difference in thickness of the arm, gripper and fingers
  Show distance - draws a text with the stretch size
Color options
  Color - color
  Alpha - transparency
Bounding box - with these options you can render a bounding box around the robot
  Show bounding box - enables bounding box rendering
  Range - range between the circles or squares
Camera Display
You can optionally visualize the view frustum by adding a Marker display and subscribing it to the /but_data_fusion/view_frustum topic.
Subscribe CButCamDisplay through Marker Topic to /but_data_fusion/cam_view. Through Image Topic you can subscribe the display to any of the camera topics:
/stereo/left/image_raw
/stereo/right/image_raw
/cam3d/rgb/image_raw
There are basically three properties you can set; the rest is only informational:
Alpha - transparency of the polygon with video frames (between 0.0 and 1.0)
Video Distance - distance of the polygon between the sensor and the closest point cloud point (between 0.0 and 1.0; 0.0 means the position of the sensor, 1.0 the distance of the closest point cloud point, 0.5 is half way between the sensor and the point cloud, etc.)
Draw Behind - remnant of the Map display with the same functionality
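The Video Distance mapping described above is a simple linear interpolation between the sensor position and the closest point-cloud point. A minimal sketch of that behaviour (the function name and the tuple representation of positions are hypothetical, not the plugin's actual API):

```python
def video_plane_position(sensor_pos, closest_pt, video_distance):
    """Place the video polygon between the sensor (0.0) and the
    closest point-cloud point (1.0) by linear interpolation."""
    if not 0.0 <= video_distance <= 1.0:
        raise ValueError("Video Distance must be between 0.0 and 1.0")
    return tuple(s + video_distance * (c - s)
                 for s, c in zip(sensor_pos, closest_pt))

# 0.5 places the polygon half way between the sensor and the closest point
print(video_plane_position((0.0, 0.0, 0.0), (2.0, 0.0, 4.0), 0.5))  # → (1.0, 0.0, 2.0)
```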
Object Manager
For adding new objects use Object control panel:
You can set these options:
Frame - fixed frame
Position - position where you want to put the object
Description - object's description
Color - object's color
Add object
Select the object from the combo box and click on the Add object button.
Remove object
If you want to remove some object, just select it from the combo box and click on the Remove object button.
Note:
Not all objects from the database have their meshes in the database.
The position to place the object can be estimated by clicking into the scene with the 2D Pose Estimate tool (temporary solution).
9 APPENDIX 2-SOFTWARE COMPONENTS INSTALLATION
In this section, the installation and configuration of the software components proved in Section 3 (Proof of Technology) is given.
9.1 MIXED REALITY SERVER (SRS_MIXED_REALITY_SERVER)
Installation
This SRS package is not open to the public. If you want to run the packages, please write to goa@ipa.fhg.de or fmw@ipa.fhg.de. You can then fetch the source from GitHub.
rosmake srs_mixed_reality_server
Configuration
The node can be configured via its launch file, MRS.launch. You can customize the virtual map size and the augmentation level: 0 draws the map only, 1 draws the map and a graphical representation of the objects (rectangles and ellipses), and 2 draws the map with image icons.
<param name="mapWidth" value="1024" type="int"/>
<param name="mapHeight" value="1024" type="int"/>
<param name="augmentationLevel" value="2" type="int"/>
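The three augmentation levels amount to a simple dispatch on the parameter value. A hedged sketch of that switch (the helper names draw_shapes and draw_icons are hypothetical placeholders for the server's actual rendering code, and the string "images" stand in for real map data):

```python
def augment(map_img, objects, level,
            draw_shapes=lambda m, o: m + "+shapes",
            draw_icons=lambda m, o: m + "+icons"):
    """Mimic the augmentationLevel switch: 0 = map only,
    1 = map + geometric shapes, 2 = map + image icons."""
    if level == 0:
        return map_img
    if level == 1:
        return draw_shapes(map_img, objects)
    return draw_icons(map_img, objects)

print(augment("map", ["milk"], 2))  # → map+icons
```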
To start the Mixed Reality Server:
roslaunch srs_mixed_reality_server MRS.launch
Dependencies
In order to generate the augmented map, you need the following components compiled and running:
roscore
Care-O-bot simulation in gazebo or the real robot
cob_2dnav (Care-O-Bot navigation stack)
srs_knowledge (SRS Knowledge Database)
srs_object_database (The Household Objects Database)
rosbridge (to communicate to the UI_PRI)
9.2 KNOWLEDGE BASE (SRS_KNOWLEDGE)
INSTALL
To be able to compile it, rosjava_jni is needed (although rosjava could be considered in the future, as rosjava_jni will no longer be supported).
Also, JAVA_HOME should be set. Put the line below in .bashrc (if under Ubuntu 11.04; for other systems, the path should be changed accordingly).
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
The environment variable ROBOT_ENV must be set properly, as it is read by the program to decide which RDF file to load. With the default simulation:
export ROBOT_ENV=ipa-kitchen
9.3 GRASPING (SRS_GRASPING)
Installation
This package depends on OpenRAVE. The faster option is to install OpenRAVE from its repository:
sudo add-apt-repository ppa:openrave/release
sudo apt-get update
sudo apt-get install openrave
Another option is to install the openrave_planning package.
We recommend the first option to avoid configuration problems with environment variables.
9.4 PRIVATE USER INTERFACE (SRS_UI_PRI)
INSTALL
This shows how to install the private user interface (srs_ui_pri).
1. Add srs.ipa to iTunes.
2. Sync your iPad with iTunes.
CONFIGURATION
Go to the Settings app and select SRS.
1. In Remote Host enter
ws://IP:9090
where IP is the IP address of the machine on which you are running rosbridge and srs_decision_making.
2. In the Video feed 1 enter
http://IP:8080/stream?topic=/stereo/left/image_raw
3. In the Video feed 2 enter
http://IP:8080/stream?topic=/stereo/right/image_raw
4. Dynamic joysticks determines whether the joystick can be moved around by touching longer on the place you want to move the joystick to.
5. Auto hide means whether joystick will fade out after it is released.
6. Hand manipulation means whether hand controls will show up in the Manual Control view
7. In the Map Server enter the following
http://IP:8080/snapshot?topic=/map
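The video-feed and map URLs above share one pattern, so a small helper makes misconfiguration easier to spot. This is an illustrative sketch only (the helper is not part of the app; it merely reproduces the http://IP:8080/<stream|snapshot>?topic=... scheme shown above):

```python
def feed_url(ip, topic, kind="stream", port=8080):
    """Build the http://IP:8080/<kind>?topic=<topic> URLs used in the settings."""
    return "http://%s:%d/%s?topic=%s" % (ip, port, kind, topic)

print(feed_url("10.0.0.5", "/stereo/left/image_raw"))
# → http://10.0.0.5:8080/stream?topic=/stereo/left/image_raw
print(feed_url("10.0.0.5", "/map", kind="snapshot"))
# → http://10.0.0.5:8080/snapshot?topic=/map
```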
9.5 LOCAL USER INTERFACE (SRS_UI_LOC)
INSTALL
This shows how to install the local user interface (srs_ui_loc).
1. Allow installing apps from unknown sources: Settings -> Applications -> Unknown sources -> check
2. Copy srs_ui_loc.apk onto the device (Samsung Galaxy SII)
3. Install it by opening it
CONFIGURATION
After installing the apk file, you will find a new app called "SRS" in your apps list. Start the app and follow these instructions to configure it correctly:
Touch the "Settings" button
Type in the hostname of the computer running the ROS server (e.g. "cob3-3-pc1")
Type in the apartment name (e.g. "ipa-kitchen")
Type in the Skype user name for the UI_LOC user
Type in the Skype user password for the UI_LOC user
Type in the Skype user name used for emergency calls
Touch the "Edit etc/hosts file" button and add/change entries as necessary (format for each line: "ip-address hostname", e.g. "192.168.0.101 cob3-3-pc1")
Touch the "Save" button
Touch the "Save Settings" button
Note 1: You need to scroll on the Settings screen in order to reach all parts of it.
Note 2: Type in all necessary configuration data without quotes.
9.6 ASSISTED ARM NAVIGATION (SRS_ASSISTED_ARM_NAVIGATION)
Installation
install the COB stacks from the Ubuntu repository or from git (see installation instructions)
check out the srs and srs_public git repositories
compile the srs_assisted_arm_navigation package:
rosmake srs_assisted_arm_navigation
9.7 INTERACTION PRIMITIVES (SRS_INTERACTION_PRIMITIVES)
Installation
Both components are in the srs git repository in the srs_interaction_primitives package and can be compiled with the standard ROS tool rosmake:
rosmake srs_interaction_primitives
9.8 ENVIRONMENT MODEL (SRS_ENV_MODEL)
Installation
All components are in the srs git repository in the srs_env_model package and can be compiled with the standard ROS tool rosmake:
rosmake srs_env_model
Configuration
Octomap plugin
For further information about the octomap library and the meaning of its parameters, see: http://octomap.sourceforge.net
Octomap and sensor parameters:
Parameter name Description Default value
resolution Octomap resolution 0.1
sensor_model/hit Probability value set to the occupied node. 0.7
sensor_model/miss Probability value set to the free node 0.4
sensor_model/min Clamping minimum threshold 0.12
sensor_model/max Clamping maximum threshold 0.97
max_range Maximal sensor scope -1.0 - no range check
octomap_publishing_topic Binary octomap publishing topic name butsrv_binary_octomap
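The sensor_model parameters follow octomap's standard clamped log-odds occupancy update: each hit or miss adds the corresponding log-odds increment, and the result is clamped between the min/max thresholds. A pure-Python illustration of that update rule (not the library's code itself), using the defaults from the table above:

```python
import math

def logodds(p):
    """Convert a probability to its log-odds value."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Convert a log-odds value back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Defaults from the table above
L_HIT, L_MISS = logodds(0.7), logodds(0.4)
L_MIN, L_MAX = logodds(0.12), logodds(0.97)

def update(l, hit):
    """Integrate one measurement into a node's clamped log-odds value."""
    l += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, l))

l = 0.0  # unknown cell: p = 0.5
for _ in range(3):
    l = update(l, hit=True)
print(round(prob(l), 3))  # → 0.927
```

The clamping (sensor_model/min and sensor_model/max) keeps nodes from becoming so certain that they can no longer change when the environment does.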
Outdated node filtering parameters:
Parameter name Description Default value
camera_info_topic Camera info topic name /cam3d/camera_info
visualize_markers Should markers displaying the camera cone be visualized? true
markers_topic Markers topic name visualization_marker
camera_stereo_offset_left Stereo camera left eye offset 128
camera_stereo_offset_right Stereo camera right eye offset 0
Ground plane filtering parameters:
Parameter name Description Default value
filter_ground Should ground plane be filtered? false
ground_filter/distance Distance of points from plane for RANSAC 0.04
ground_filter/angle Angular derivation of ground plane 0.15
ground_filter/plane_distance Distance of found plane from z=0 to be detected as ground (e.g. to exclude tables) 0.07
Collision map plugin
Parameter name Description Default value
collision_map_octree_depth Used octree depth for map generation. 0 - whole tree
collision_map_radius Collision map maximal radius in meters 2.0
collision_map_publisher Collision map publishing topic name butsrv_collision_map
collisionmap_frame_id Frame id to which points will be transformed when publishing the collision map /base_footprint
Collision grid plugin
Parameter name Description Default value
collision_grid_octree_depth Used octree depth for grid generation. 0 - whole tree
grid_min_x_size Collision grid minimal x 0.0
grid_min_y_size Collision grid minimal y 0.0
collision_grid_publisher Collision grid publishing topic name butsrv_collision_map
Collision objects plugin
Parameter name Description Default value
collision_object_publisher Collision object publishing topic name butsrv_collision_object
collision_object_frame_id Output frame id /map
collision_object_octree_depth Used octree depth for objects generation. 0 - whole tree
2D map plugin
Parameter name Description Default value
collision_object_publisher Collision map publishing topic name butsrv_map2d_object
collision_object_frame_id Output map frame id /map
min_x_size Minimal map size - used for padding 0.0
min_y_size Minimal map size - used for padding 0.0
collision_object_octree_depth Used octree depth for objects generation. 0 - whole tree
Marker array plugin
Parameter name Description Default value
marker_array_publisher Marker array publishing topic name butsrv_markerarray_object
marker_array_frame_id Marker array output frame id /map
marker_array_octree_depth Used octree depth for objects generation. 0 - whole tree
Point cloud plugin
Parameter name Description Default value
pointcloud_centers_publisher Pointcloud publishing topic name butsrv_pointcloud_centers
pointcloud_subscriber Input pointcloud subscriber topic name /cam3d/depth/points
pointcloud_min_z Point cloud minimal z value -std::numeric_limits<double>::max()
pointcloud_max_z Point cloud maximal z value std::numeric_limits<double>::max()
pointcloud_octree_depth Used octree depth for objects generation. 0 - whole tree
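The pointcloud_min_z / pointcloud_max_z pair amounts to a pass-through filter on the z coordinate, with the defaults effectively disabling the filter (as in the table above). A minimal sketch of that behaviour (the list-of-tuples cloud representation is for illustration only):

```python
def passthrough_z(points, zmin=-float("inf"), zmax=float("inf")):
    """Keep only points whose z coordinate lies inside [zmin, zmax]."""
    return [p for p in points if zmin <= p[2] <= zmax]

cloud = [(0.0, 0.0, -1.5), (0.2, 0.1, 0.3), (1.0, 1.0, 2.8)]
print(passthrough_z(cloud, zmin=0.0, zmax=2.0))  # → [(0.2, 0.1, 0.3)]
```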
9.9 ENVIRONMENT PERCEPTION (SRS_ENV_MODEL_PERCP)
Installation
All components are in srs git in srs_env_model_percp package and can be compiled with ROS standard tool rosmake
rosmake srs_env_model_percp
Configuration
Optional parameters for each node. Attention: since this module is flagged as under construction, several parameters cannot be set up yet.
Depth map segmentation
List of but_segmenter's optional parameters:
-type [TYPE]
  Specifies the type of segmentation method:
  depth - segmenter uses only depth information
  normal - segmenter uses only normal information (also delegates the std deviation image)
  combined - segmenter uses combined depth and normal information (also delegates the std deviation image)
  predictor - segmenter uses the predictor plane algorithm
  tile - segmenter uses tiling plane segmentation (also delegates the std deviation image)
  Default is the combined method.
-maxdepth [NUM]
  How deep a point may be to still be considered for computation (in millimetres)
  Default is 3000
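The -maxdepth cut-off can be pictured as zeroing out depth-map pixels beyond the threshold before segmentation. A small illustrative sketch (not the node's actual implementation; depths are in millimetres and 0 marks an invalid measurement):

```python
def clip_depth(depth_mm, maxdepth=3000):
    """Discard (zero out) measurements deeper than maxdepth millimetres."""
    return [[d if 0 < d <= maxdepth else 0 for d in row] for row in depth_mm]

print(clip_depth([[1200, 4500], [0, 2999]]))  # → [[1200, 0], [0, 2999]]
```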
Plane Fitting
List of but_plane_detector's optional parameters:
-input [VAL]
  pcl - plane detector expects a PCL point cloud as input
  kinect - plane detector expects a Kinect depth map as input
  This parameter is required; please specify the input.
-target [FRAME_ID]
  Target frame id of the sent planes.
  Attention: the tf path from the point cloud frame id to the target id must exist!
Dependencies
opencv2, eigen, roscpp, image_transport, camera_calibration_parsers, cv_bridge, std_msgs, pcl, visualization_msgs, pcl_ros, srs_env_model, srs_interaction_primitives
9.10 ARM NAVIGATION TESTS (SRS_ARM_NAVIGATION_TESTS)
Installation
Install srs_assisted_arm_navigation, srs_assisted_arm_navigation_ui and srs_assisted_grasping, then build srs_arm_navigation_tests with:
rosmake srs_arm_navigation_tests
9.13 UI BUT (SRS_UI_BUT)
Installation
Build package:
rosmake srs_ui_but
Distance Linear Visualizer
First you need to run but_services_server:
rosrun srs_ui_but but_services_server
Then run rviz:
rosrun rviz rviz
Load plugin and add Distance Linear Visualizer display.
Distance Circular Indicator
Run rviz:
rosrun rviz rviz
Load plugin but_gui and add Distance Circular Indicator display.
COB Stretch Indicator
First you need to run simulation and cob_stretch_publisher:
rosrun srs_ui_but cob_stretch_publisher
Then run rviz:
rosrun rviz rviz
Load plugin and add COB Stretch Indicator display.
Camera Display
Run everything together with launch file:
roslaunch srs_ui_but but_data_fusion.launch
Or launch manually all needed components:
rosrun srs_ui_but view
roslaunch cob_2dnav 2dnav.launch
Run rviz:
rosrun rviz rviz
Load plugin but_gui and add CButCamDisplay display.
Object Manager
For example usage just run
roslaunch srs_ui_but object_manager_test.launch
10 APPENDIX 3-PROOF OF ACCEPTANCE
10.1 INTRODUCTION
In this appendix, the main target is to prove the system performance from a technology point of view, while the user tests also analysed acceptance at the same time. In these tests, we combined ad-hoc questions with AttrakDiff, selected to measure user experience in a simple and immediate manner. We also collected the users' perception of the robot and the system through oral questionnaires and conversation.
10.2 TESTS RESULTS
The test results, which provide the proof of acceptance, are presented in the table below.
Time Location Participant group Results and discussion
1 Jan 2012 San Sebastian, Spain
Local user Although the robotic arm presented a peculiar appearance and technological restrictions, the system did not evoke responses of fear or rejection in the participants. Users did not seem afraid of the system, and their behaviour showed that they considered it safe. The ad-hoc acceptance questions showed mostly positive perceptions of the functionalities of the robotic arm, the intention to use it, and the importance of autonomy and a sense of independence in this population. The results also show the necessity of taking the cost-effectiveness assessment and socio-economic implications into account in task 6.4. Summarising the users' opinions of the system, the common view is that the system is useful for people who live alone and can hardly move, but that it is big for an apartment; the users would be willing to learn to use it, although they are a little worried about the learnability of the system.
2 Jan 2012 San Sebastian, Spain
Professional User
Professionals agree that the technology is inventive and interesting, but also a little slow and occasionally not clearly meeting the expectations of the users. Regarding the ad-hoc acceptance questions, the professionals interviewed consider the robotic arm useful for frail older adults and interesting for their work; 58% of the participants responded positively to the possibility of using the system when it becomes available, 42% gave neutral responses, and no negative responses were collected.
3 Feb 2012 Stuttgart, Germany
Local user In general, the two elderly test users felt safe with the robot and trusted it. The following statements were made:
Robot appearance: It was stated that the robot looks very “friendly” and “sympathetic”. The design of the robot was described as “very good”. A stated reason was that the technology is hidden and the robot looks like a home product. It was stated that it “fits into an apartment like an armchair”. The round contours and soft outer shell were mentioned positively. The robot’s size was perceived as “good”.
Robot movement and sound: Operation was found to be very quiet, which was perceived positively. Robot movement was perceived as slow and smooth. Wheeled operation was preferred over the possibility of a biped robot: “It’s nice that it’s on wheels and not walking unnaturally.”
Robot safety: When asked about their impression of robot safety, the participants both agreed to “feel safe”.
4 May 2012 Milan, Italy Local user The majority of the elderly interviewed liked very much the idea of a robot helping them at home, with the possibility of being remotely monitored and supported by family or professional operators. In general they were not scared (even if a few of them were, mostly because of the arm), were curious and amused by the novelty, and they found the usage experience very entertaining and interesting.
With regard to the 23-year-old man in a wheelchair, it is interesting to note that he was also amused by the SRS system and its possibilities, but he agreed that to become a useful assistive device it still needs a lot of development. However, the main difference from the elderly people is that, despite the lowest Barthel index (33/100), he felt much more confident about being able to use the system by himself; although he also has some handling and vision difficulties linked to his disability, he was in fact more skilled in using the UI_LOC device. Even if this is the point of view of a single man, this result is encouraging: it suggests that the SRS system could be exploited not only for elderly people but also for young people with motor disabilities.
5 May 2012 Milan, Italy Private user Most of the caregivers were quite satisfied with the concept of the robot, considered valuable for the elderly to enhance the management of everyday tasks and to provide support in case of limited mobility. They also liked that the robot could give them the possibility of constantly monitoring the health of the assisted person at a distance, thanks to the visual control and speaking option, thus allowing for more freedom and more free time.
Nevertheless, the qualitative results highlighted that, though appreciating the idea in theory, there are some doubts linked to the limited range of activities the robot can accomplish at the moment, and to the fact of dealing with a prototype, which creates some worries concerning a potential failure of the robot in case of need.
6 May 2012 Milan, Italy Professional user
The people interviewed in general liked the concept of SRS: the idea of audio-video feedback to check the consciousness status of the assisted person, together with the possibility of executing tasks such as reaching objects located in places that are difficult to reach. They are also more aware of the current state of development of other comparable products and projects.
10.3 SUMMARY
In summary, the user acceptance results show a high acceptance rate for the system. The majority of people, across the local user, private user and professional user groups, like the concept and design of SRS. They all feel safe and at ease in front of the robot. Nevertheless, some issues need to be mentioned for further development: the elderly people are a little concerned about learnability; the private caregivers are more concerned about a potential failure; and the professional operators are more concerned about the current state of development of other comparable products and projects.