BENJAMIN COHEN-LHYVER
PhD in Robotics & Artificial Intelligence
EXPERIENCE & EDUCATION
2014-2018 • INSTITUTE OF INTELLIGENT SYSTEMS AND ROBOTICS (UMR 7222)
PHD STUDENT • RESPONSIBLE for the creation, conceptualization, formalization, and implementation of a robot's cognitive abilities within the European FET FP7 TWO!EARS project (www.twoears.eu/project).
CREATOR, DESIGNER & DEVELOPER of the whole Head Turning Modulation model (HTM, see appendix). During this 3-year project, I proposed an innovative model of audiovisual attention for a robot endowed with head movements, together with its integration into the robot and its evaluation. I was also in charge of the lab's progress reports, deliverables, and scientific communication. The development of the HTM involved several programming languages: Matlab (the choice of the TWO!EARS consortium), C++, Bash, ROS, and Genom3.
Education
2013-2014 • COLLÈGE DE FRANCE (IN UMR 7152)
DATA SCIENTIST and DESIGNER & DEVELOPER of OLCA (Online LFP & CSD Analyzer), a Matlab toolbox that provides real-time statistical analysis of electrophysiological data together with a powerful, interactive, and user-friendly graphical representation of the data. During this year, I worked both on the scientific side of the project, proposing a relevant approach to the analysis of our electrophysiological data, and on the development side, building software that directly implemented this approach while offering scientists an easy-to-use tool to process and visualize these data. I also had to respect the team's decision to use Matlab exclusively, which taught me to adapt to strong constraints.
2012 • ECOLE NORMALE SUPÉRIEURE OF PARIS (IN INSERM U960)
DATA SCIENTIST for neuroimaging data, together with the design and running of psychoacoustics experiments in humans. As with my previous internship, the months I spent at the ENS brought me closer to psychoacoustics and computational neuroscience.
DATA SCIENTIST for electrophysiological auditory data. As my first experience in a research lab, and the place where I first learned to code (a couple of years before starting the Master in Bioinformatics), this internship strongly confirmed my desire to combine brain-related science and programming.
2017 • PhD degree
During my last year, in parallel to working on my thesis, I was selected for an Assistant Professor position at my university. I was in charge of teaching Licence and Masters students, mainly in Informatics (196 hours in total). This experience, following my years as a music teacher, allowed me to challenge my own knowledge and to make my students as passionate as possible about fields that can sometimes seem boring.
2013 • Masters in Bioinformatics
Modules included: Programming (Python, C++, C, Java), Modeling, Computational Neurosciences, Bioinformatics, Mathematics, Proteomics, Genomics. Ranked 1st/18 in semester 4 (6-month internship evaluated through a dissertation and an oral presentation) and 4th/18 in semester 3 (no ranking in semesters 1 & 2). In addition, I tutored 20+ students at home, for a total of ~35 hours.
2011 • Licence in Biology, Chemistry, Physics and Biochemistry
Modules included: Neurosciences, Immunology, Physics, Mathematics, Developmental Biology, Biochemistry, Organic Chemistry, History & Philosophy of Sciences.
UNIVERSITY PARIS DIDEROT (PARIS VII)
UNIVERSITY PIERRE AND MARIE CURIE (PARIS VI)
UNIVERSITY PARIS DESCARTES (PARIS V)
2011 • CENTER FOR NEUROSCIENCES OF PARIS-SUD (UMR 9197)
Personal info
Address: 6, passage Gauthier, 75019 Paris, France
Phone: +33 6 63 37 64 60
Website: www.cohenlhyver.com
LinkedIn: @cohenlhyver
Skills overview
Programming: Matlab (7 years), Python (5 years), Keras/TF (2 years), C++ (3 years), ROS (4 years), Genom3 (4 years), Git (5 years)
Machine Learning: Unsupervised (5 years), Deep Learning (2 years), Reinforcement (5 years)
Robotics: Integration (5 years), Hardware (2 years), Testing (2 years)
Communication: Keynotes (7 years), Conferences (4 years), Teaching (10+ years), Scientific Reports (6 years), Technical Reports (4 years)
Languages: French (native speaker), English (fluent), German (basic), Spanish (basic)
PHD THESIS description
ATTENTIONAL BEHAVIORS are what make us react to a very wide range of perceptual events, from the sound of a glass unexpectedly falling and breaking on the floor to the sudden recognition of a friend in a crowded street. One of these attentional reactions involves head movements that bring our visual sensors in front of the event of interest. This overt mechanism gathers additional perceptual information through vision, which is, first, useful for disambiguating unclear situations and, second, processed faster and more precisely by the dedicated cortices. In conjunction with the emergence of 'multimodal objects' in the brain, attention thus becomes a powerful and omnipresent mechanism for understanding our complex world.

Whereas robotic exploration has often focused on the topological aspects of an environment (room size, obstacles, usable paths, etc.), algorithms are now robust and reliable enough to bring exploration to a higher level: a more semantic exploration of the world. This is precisely where my model comes in. The Head Turning Modulation model (HTM) I conceptualized, formalized, implemented, and integrated into our robot is an innovative computational model of attention that provides a humanoid robot with the ability to determine, entirely by itself, the most important audiovisual objects during its exploration of unknown environments. Composed of two modules, Dynamic Weighting (DW) and Multimodal Fusion & Inference (MFI), the overall model has been evaluated in complex simulated environments and on the real robotic platform in real unknown environments.
Through the DW module, the robot becomes able to create, in real time and without prior knowledge, its own set of behavior rules that are, first, computed environment by environment (meaning that an object can be marked as important in one environment but not worth any attention in another) and, second, updated with respect to what the robot perceives throughout its exploration. Moreover, the robot is able to transfer part of the knowledge created in one environment to another whenever it detects that the environment it is currently exploring seems semantically identical to one it has already explored.

Additionally, the HTM model embeds a second attentional layer through the MFI module, which implements an online self-supervised active learning paradigm. This learning component is dedicated to building the robot's database of audiovisual objects by learning the association between the audio and visual information it has encountered so far (and it is expandable to additional sources of information). This learning allows the robot both to infer a potentially missing modality, as when an object is placed behind the robot, and to clean the input data on which it relies, since these data inevitably include classification errors. The results we obtained, both in simulated environments and in real ones, showed a significant improvement of the input data the HTM receives (183.6% better), together with a relevant attentional behavior in unpredictable environments.
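Since the HTM itself was developed in Matlab within the TWO!EARS framework, the following is only a schematic Python sketch of the congruence idea behind the DW module: the class name, the frequency-based congruence measure, and the 0.2 threshold are all hypothetical simplifications for illustration, not the actual implementation.

```python
from collections import defaultdict

class DynamicWeighting:
    """Illustrative sketch of a congruence-based attention rule.

    Hypothetical simplification: an audiovisual category is 'congruent'
    in a given environment when it occurs frequently there; rare
    (incongruent) events are deemed important and trigger a head
    movement. Rules are kept per environment, as in the DW module.
    """

    def __init__(self, congruence_threshold=0.2):
        self.threshold = congruence_threshold
        # counts[environment][category] -> number of observations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, environment, category):
        """Record one audiovisual event; return True if it warrants a head turn."""
        self.counts[environment][category] += 1
        env = self.counts[environment]
        congruence = env[category] / sum(env.values())
        # An event that is rare in THIS environment is worth attending to;
        # the same category may be congruent elsewhere.
        return congruence < self.threshold

dw = DynamicWeighting()
for _ in range(9):
    dw.observe("office", "keyboard_typing")   # frequent here -> congruent
print(dw.observe("office", "glass_breaking"))  # rare here -> True (turn head)
```

Because the counts are keyed by environment, the same sketch also reproduces the environment-by-environment behavior described above: `glass_breaking` could be congruent (hence ignored) in a different environment where it is common.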
One of the two robots I worked on, named Odi (located at ISIR, Paris, France), is endowed with binocular vision, binaural hearing, a mobile base, and head rotation. The whole HTM model has been integrated and then evaluated on this platform, using the data provided by the TWO!EARS software.
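Similarly, the missing-modality inference performed by the MFI module can be sketched as a hypothetical Python illustration (not the actual self-supervised implementation): co-occurring, possibly misclassified, audio and visual labels are accumulated, and a majority vote both infers an unseen modality and filters out classification errors.

```python
from collections import defaultdict, Counter

class MultimodalFusionInference:
    """Illustrative sketch (names are hypothetical): learn audio-visual
    label associations online, then infer a missing visual label from audio."""

    def __init__(self):
        # co_occurrence[audio_label] -> Counter of visual labels seen with it
        self.co_occurrence = defaultdict(Counter)

    def learn(self, audio_label, visual_label):
        """Record one co-occurring audio/visual classification (may be noisy)."""
        self.co_occurrence[audio_label][visual_label] += 1

    def infer_visual(self, audio_label):
        """Infer the most likely visual label, e.g. when the object is behind
        the robot and only audio is available. Returns None if never heard."""
        seen = self.co_occurrence.get(audio_label)
        return seen.most_common(1)[0][0] if seen else None

mfi = MultimodalFusionInference()
# Noisy input stream: the visual classifier is occasionally wrong.
for label in ["dog", "dog", "dog", "cat"]:
    mfi.learn("barking", label)
print(mfi.infer_visual("barking"))  # -> dog (majority vote cleans the errors)
```

The majority vote is the crudest possible stand-in for the learned association; it is only meant to convey how accumulating audio-visual co-occurrences lets the robot both fill in a missing modality and outvote classification errors in the input stream.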
“How to bring attentional capabilities to a robot exploring unknown environments, without any prior knowledge about them?”
PUBLICATIONS
2016 • in International Congress for Acoustics (Conference), Buenos Aires, Argentina
COHEN-LHYVER B., ARGENTIERI, S., AND GAS, B.
Multimodal fusion and inference using binaural audition and vision
In this conference paper, we introduce the Multimodal Fusion & Inference algorithm (part of the HTM model), an online self-supervised active learning paradigm that enables a robot to learn, in unknown environments, the relationship between the audio and visual data it perceives, in order to create its own internal multimodal-object-based representation of the world.

2015 • in IEEE - RObotics and BIOmimetics (ROBIO, Conference), Zhuhai, China
COHEN-LHYVER B., ARGENTIERI, S., AND GAS, B.
Modulating the auditory turn-to reflex on the basis of multimodal feedback loops: the Dynamic Weighting model
Here, we present the Dynamic Weighting algorithm (DW, also part of the HTM model), which is the attentional component of the whole model. Through the notion of Congruence of an audiovisual event occurring in a given environment, a notion defined as semantic saliency, the DW enables the robot to decide whether this event is important or not. A head movement is thus triggered whenever the event is worth focusing on.

2014 • in EAA - 7th Forum Acusticum (Conference), Krakow, Poland
WALTHER T., AND COHEN-LHYVER B.
Multimodal feedback in auditory-based active scene exploration
In this invited conference paper, one of the TWO!EARS partners and I present a computational framework implementing some of the feedback loops that the project identified as relevant. An environment simulator with a movable robot and audiovisual sources was also proposed.
in The Technology of Binaural Understanding (Book, under review)
COHEN-LHYVER B., ARGENTIERI, S., AND GAS, B.
Using Audition as a Trigger for Attentional Head Movements in Multimodal Environments
Chapter of the book edited by Pr. Em. Jens Blauert (Berlin, Germany) & Pr. Jonas Braasch (New York, USA). Publisher: Springer Nature.
In this book chapter, which follows The Technology of Binaural Listening (published in 2013 and edited by Pr. Jens Blauert), I address the question of how much audition is used by humans to trigger head movements, together with the potential implications for designing human-like robots. In particular, I offer the reader insights into three fundamental cerebral phenomena (the Reverse Hierarchy Theory, the Mismatch Negativity, and audio and visual saliency), and a description of the Superior Colliculus, a major brain structure involved in multimodal integration and the triggering of motor actions. I also present my HTM model as an illustration of how these principles can be applied to an artificial agent.

2018 • in Frontiers in Neurorobotics (Journal), just published
Research Topic: Intrinsically Motivated Open-Ended Learning in Autonomous Robots
COHEN-LHYVER B., ARGENTIERI, S., AND GAS, B.
The Head Turning Modulation system: an active multimodal paradigm for intrinsically motivated exploration of unknown environments
In this journal article, we present a condensed version of the HTM model and its two constitutive modules: the Multimodal Fusion & Inference module and the Dynamic Weighting module. The first is an online self-supervised active learning paradigm, while the second is the attentional component of the model (see the thesis summary in this document).
IF 2.606, among the “world's most cited Neurosciences journals”.
Additional skills
Music: Pianist & guitarist, Composer, Producer, Sound Creator (plays and movies)
Photography: Travel & street photography, Shootings for actors, Shootings for music bands
Video: Direction, Editing
Sport: Climbing (outdoors and in gyms), Sailing (former teacher), Basketball, Tennis (competition), Skydiving
TWO!EARS PROJECT description
THE TWO!EARS PROJECT is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes. It has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement n°618075.

Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a vast variety of ICT systems and applications. Human beings do not react according to what they perceive but rather on the grounds of what the percepts mean to them in their current action-specific, emotional, and cognitive situation. Thus, while many models that mimic the signal processing involved in human visual and auditory processing have been proposed, these models cannot predict the experience and reactions of human users. The model we aim to develop in the TWO!EARS project will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing.
“Reading the world with Two!Ears”
Website of the project: twoears.eu/
Full abstract on: twoears.eu/project
Key features: Human-centeredness, Toolbox of evaluated modules, Active Hearing, Cross-modal Integration, Meaning extraction, Meaning Awareness, Exploration, Top-down Adaptation, Public model software, Audiovisual scene database
List of publications: twoears.eu/publications/
Parts of the project I have been involved in:
GitHub: github.com/TWOEARS/TwoEars
THE TWO!EARS PROJECT: THE CONSORTIUM
European Labs
• Audio Visual Technology Group — Technological University of Ilmenau (Germany): Alexander Raake (Project leader), Hagen Wierstorf.
• Neural Information Processing Group — Technical University of Berlin (Germany): Klaus Obermayer, Ivo Trowitzsch, Johannes Mohr, Youssef Kashef.
• Department of Electrical Engineering, Hearing Systems — Technical University of Denmark: Torsten Dau, Tobias May.
• Institute of Communication Acoustics — Ruhr University (Bochum, Germany): Jens Blauert, Dorothea Kolossa, Thomas Walther, Cristopher Schymura.
• Institute of Intelligent Systems and Robotics — University Pierre and Marie Curie (Paris, France): Bruno Gas, Sylvain Argentieri, Benjamin Cohen-Lhyver.
• Robotics, Action and Perception Group — Laboratory of Architecture and Analysis of Systems (LAAS) (Toulouse, France): Patrick Danès, Ariel Podlubne, Thomas Forgue.
• Institute of Communications Engineering — University of Rostock (Germany): Sascha Spors, Fiete Winter.
• Department of Computer Science — University of Sheffield (Great Britain): Guy Brown, Ning Ma.
• Human-Technology Interaction Group — Technological University of Eindhoven (The Netherlands): Armin Kohlrausch, Ryan Chungeun Kim.

American Partner
• The Center for Cognition, Communication, and Culture — Rensselaer Polytechnic Institute (Troy, New York, USA): Jonas Braasch.
REFERENCES

Prof. Dr.-Ing. Dr. Tech. h.c. Emeritus Jens Blauert
• Prof. (em.), Acoustics and EE, Ruhr-Universität Bochum
• Prof. (adj.), Architectural Acoustics, RPI, Troy NY
• Institute of Communication Acoustics, Ruhr-Universität Bochum, D-44780 Bochum, Germany
• [email protected]
• +49 234 322 2496 (office)

Prof. Dr.-Ing. Alexander Raake
• Head of Audiovisual Technology Group
• Institute for Media Technology
• Helmholtzplatz 2, 98693 Ilmenau, Germany
• [email protected]
• +49 3677 69-2757

Prof. Patrick Danès
• Laboratory of Analysis and Architecture of Systems (Laboratoire d'Analyse et d'Architecture des Systèmes, LAAS)
• National Center of Scientific Research (Centre National de la Recherche Scientifique, CNRS)
• University Toulouse III Paul Sabatier
• 7, avenue du Colonel Roche, BP 54-200, F-31031 Toulouse Cedex 4, France
• [email protected]
• +33 5 61 33 78 25

Prof. Chantal Milleret
• Center for Interdisciplinary Research in Biology (CIRB)
• Collège de France
• 11, place Marcellin Berthelot, 75231 Paris Cedex 05, France
• [email protected]

Prof. Bruno Gas (supervisor)
• Institute of Intelligent Systems and Robotics (ISIR), Team Interaction
• University Pierre and Marie Curie (UPMC)
• Head of the Department 'Master of Engineering Sciences' of UPMC
• Pyramid — T55-65, CC 173 — 4, place Jussieu, 75005 Paris, France
• +33 1 44 27 28 75
• [email protected]

Associate Prof. Sylvain Argentieri (co-supervisor)
• Institute of Intelligent Systems and Robotics (ISIR), Team Interaction
• University Pierre and Marie Curie (UPMC)
• Pyramid — T55-65, CC 173 — 4, place Jussieu, 75005 Paris, France
• +33 1 44 27 63 55
• [email protected]