ADAPT IST-2001-37173
Artificial Development Approach to Presence Technologies
2nd Review Meeting, Munich, June 7-9th, 2004


Page 1: ADAPT IST-2001-37173

ADAPT IST-2001-37173

Artificial Development Approach to Presence Technologies

2nd Review Meeting
Munich, June 7-9th, 2004

Page 2: ADAPT IST-2001-37173

Project funded by the Future and Emerging Technologies arm of the IST Programme
Presence Research Initiative

Consortium

Total cost: €1,335,141; Community funding: €469,000

Project start date: October 1st, 2002 Project duration: 36 months

Participant Role  Participant Name   Institution                                             Short Name  Country
Coordinator       Giulio Sandini     DIST – University of Genova                             DIST        I
Partner           Rolf Pfeifer       University of Zurich, Dept. of Information Technology   UNIZH       CH
Partner           Jacqueline Nadel   UMR7593, CNRS, University Pierre & Marie Curie, Paris   CNRS        F

Page 3: ADAPT IST-2001-37173


Goal

We wish to…
…understand the process of building a coherent representation of visual, auditory, haptic, and kinesthetic sensations

process → development
process → dynamic representation

Perhaps, once we “know” how it works, we can “ask” a machine to use this knowledge to elicit the sense of presence

Page 4: ADAPT IST-2001-37173


So, we are asking…

How do we represent our world and, in particular, how do we represent the objects we interact with?

Our primary mode of interaction with objects is through manipulation, that is, by grasping objects!

Page 5: ADAPT IST-2001-37173


Two-pronged approach
– Study how infants do it
– Implement a “similar” process in an artificial system

Learning by doing: modeling → abstract principles → build new devices

Page 6: ADAPT IST-2001-37173


Scientific prospect

From the theoretical point of view:
– Studying the nature of “representation”

From development: the developmental path

– Interacting with objects: multi-sensory representation, object affordances
– Interpreting others’ interaction with objects: imitation

From embodiment and morphology:
– Why do we need a body? How does morphology influence/support computation?

Computational architecture:
– How can an artificial system learn representations that support similar behaviors?

Page 7: ADAPT IST-2001-37173


Vision ↔ Touch

Streri & Gentaz (2003, 2004)

Reversible cross-modal transfer between hand and eyes in newborn infants

Transfer of shape is not reversible

Page 8: ADAPT IST-2001-37173


6-month-olds detect a violation of intermodality between face and voice

A teleprompter device allows the voice or the image to be delayed independently
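The independent-delay manipulation can be sketched as a simple buffered stream: one modality passes through a fixed-length queue while the other stays live, desynchronizing the two by a controlled amount. This is only an illustrative toy (sample names and tick counts are made up), not the actual experimental apparatus.

```python
from collections import deque

def delayed(stream, delay_steps):
    """Yield the stream delayed by delay_steps ticks (None until the buffer fills)."""
    buf = deque([None] * delay_steps)
    for item in stream:
        buf.append(item)
        yield buf.popleft()

# Hypothetical per-tick streams: video frames and voice samples.
frames = [f"img{t}" for t in range(5)]
voice = [f"snd{t}" for t in range(5)]

# Delay only the voice by 2 ticks; the image stays live, so at tick 2
# the infant sees img2 paired with the voice from tick 0.
pairs = list(zip(frames, delayed(voice, 2)))
# pairs[2] == ("img2", "snd0")
```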

Page 9: ADAPT IST-2001-37173


Grasping: morphological computation

Robot hand with:
– elastic tendons
– soft fingertips
(developed by Hiroshi Yokoi, AI Lab, Univ. of Zurich and Univ. of Tokyo)

Result:
– control of grasping is a simple “close”
– details: taken care of by morphology/materials

Page 10: ADAPT IST-2001-37173


Video

Page 11: ADAPT IST-2001-37173


…how can the robot grasp an unknown object?

Use a simple motor synergy to flex the fingers and close the hand

Exploit the intrinsic elasticity of the hand; the fingers bend and adapt to the shape of the object
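The one-synergy idea can be sketched in a few lines: a single “close” command drives all fingers, and each finger simply stops where the object resists, so the final hand posture reflects the object’s shape without any per-finger planning. The contact model, thresholds, and object profile below are invented for illustration, not the hand’s actual control law.

```python
def close_hand(object_radii, max_angle=90, step=5, stiffness_limit=1.0):
    """Flex all fingers with one shared command; a finger stops when the
    simulated elastic load at its fingertip exceeds a limit."""
    angles = [0] * len(object_radii)
    done = [False] * len(object_radii)
    while not all(done):
        for i, r in enumerate(object_radii):
            if done[i]:
                continue
            angles[i] += step
            # Toy contact model: load grows once the fingertip passes the
            # object surface, reached at r degrees of flexion.
            load = max(0, angles[i] - r) * 0.1
            if load >= stiffness_limit or angles[i] >= max_angle:
                done[i] = True
    return angles

# An irregular object: each finger meets the surface at a different angle,
# so the resulting posture itself encodes the object's shape.
print(close_hand([30, 45, 60, 40]))  # → [40, 55, 70, 50]
```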

Page 12: ADAPT IST-2001-37173


Result of clustering
– 2D Self-Organizing Map (100 neurons)
– Input: proprioception (hand posture; touch sensors were not used)
– The SOM forms 7 classes (6 for the objects plus 1 for the no-object condition)

[Figure: the trained SOM unit grid, axes 0-15 in map units]
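A minimal version of this clustering step can be sketched as a small SOM trained on posture vectors. The map size matches the slide (100 neurons); the input dimensionality, object prototypes, and training schedule are illustrative assumptions, not the project’s actual data or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 10   # 10x10 = 100 neurons, as on the slide
DIM = 8     # hypothetical number of joint-angle sensors
weights = rng.random((GRID, GRID, DIM))

def train(samples, epochs=20, lr0=0.5, sigma0=3.0):
    coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in samples:
            # Best-matching unit for this posture
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby units toward the input
            h = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights[...] += lr * h[..., None] * (x - weights)

def bmu_of(x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Six hypothetical object "postures" plus one no-object posture,
# each sampled with small sensor noise.
prototypes = rng.random((7, DIM))
samples = np.concatenate([p + 0.02 * rng.standard_normal((30, DIM))
                          for p in prototypes])
train(samples)
bmus = [bmu_of(p) for p in prototypes]  # distinct postures → distinct map regions
```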

Page 13: ADAPT IST-2001-37173


Example: learning visual features

Using only one modality, non-overlapping areas of the visual field guide each other’s feature extraction

Learn invariant features from spatial context (it is well known that temporal context can be used to learn such features)
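One way the spatial-context idea can be sketched: learn a linear feature whose response on one patch correlates with its response on the neighboring, non-overlapping patch, so that spatially shared structure is what gets extracted. The synthetic patch pairs below simply share a common component; the data, dimensions, and learning rule are illustrative assumptions, not the project’s actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 2000, 16  # patch pairs, pixels per patch

# Neighboring patches share a smooth component plus independent noise.
shared = rng.standard_normal((N, 1)) * np.ones((1, D))
left = shared + 0.5 * rng.standard_normal((N, D))
right = shared + 0.5 * rng.standard_normal((N, D))

# Gradient ascent on E[a*b], with a = left·w, b = right·w and w unit-norm:
# the feature that best predicts its neighbor picks up the shared structure.
w = rng.standard_normal(D)
w /= np.linalg.norm(w)
for _ in range(200):
    a, b = left @ w, right @ w
    grad = (left * b[:, None] + right * a[:, None]).mean(axis=0)
    w += 0.1 * grad
    w /= np.linalg.norm(w)

corr = np.corrcoef(left @ w, right @ w)[0, 1]  # high if w found the shared feature
```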

Page 14: ADAPT IST-2001-37173


Page 15: ADAPT IST-2001-37173


Future work
– Continue and complete ongoing experiments
– Experiment on affordant vs. non-affordant use of objects (CNRS, UGDIST)
– Investigation of cross-modal transfer in newborn infants (CNRS)
– Experiments on the robot (UGDIST, UNIZH): learning affordances; learning visuo-motor features by unsupervised learning
– Feature extraction on videos showing mother-infant interaction

Page 16: ADAPT IST-2001-37173


Epirob04
Genoa, August 25-27, 2004

http://www.epigenetic-robotics.org

Invited speakers:

Luciano Fadiga, Dept. of Biomedical Sciences, University of Ferrara, Italy

Claes von Hofsten, Dept. of Psychology, University of Uppsala, Sweden

Jürgen Konczak, Human Sensorimotor Control Lab, University of Minnesota, USA

Jacqueline Nadel, CNRS, University Pierre & Marie Curie, Paris, France