
A proposed Wizard of OZ architecture for a Human-Robot Collaborative drawing task

David Hinwood1[0000-0002-7530-2818], James Ireland1[0000-0003-4032-2672], Elizabeth Ann Jochum2[0000-0003-2570-1906], and Damith Herath1[0000-0002-7509-5265]

1 University of Canberra, Canberra ACT 2617, Australia
https://www.canberra.edu.au/about-uc/faculties/SciTech

2 University of Aalborg, 9220 Aalborg, Denmark
https://www.en.aau.dk/

[email protected]

Abstract. Researching human-robot interaction “in-the-wild” can sometimes require insight from different fields. Experiments that involve collaborative tasks are valuable opportunities for studying HRI and developing new tools. The following describes a framework for an “in the wild” experiment situated in a public museum that involved a Wizard of OZ (WOZ) controlled robot. The UR10 is a non-humanoid collaborative robot arm and was programmed to engage in a collaborative drawing task. The purpose of this study was to evaluate how movement by a non-humanoid robot could affect participant experience. While the current framework is designed for this particular task, the control architecture could be built upon to provide a base for various collaborative studies.

Keywords: Control architecture · Wizard of OZ · ROS · Non-anthropomorphic Robot · Human Robot Interaction · Artistic collaboration

1 Introduction

Human Robot Interaction (HRI) involves the study of how humans perceive, react and engage with robots in a variety of environments. Within the field of HRI is the study of how humans and machines can collaborate on shared tasks, commonly referred to as Human Robot Collaboration (HRC) [3]. Robotic interaction/collaboration can be beneficial in many fields including healthcare [11, 12], education [4, 25], construction [1, 23] and the arts [9, 10]. As both HRI and HRC are multidisciplinary fields, they often require a collection of individuals with different skill sets [8, 10].

The following is a proposed software framework that enables researchers to run an HRC study with minimal development time. In this particular application, the software is implemented in an HRC study [5] with the deliverable being a collaboratively drawn art piece. This architecture is based around the Wizard of OZ (WOZ) experimental design methodology [21], allowing an operator to control the robot’s behaviour in real time. The outcome of this project is a framework that provides non-technical individuals with a means to conduct HRC research with minimal software development.

The Wizard of OZ control method is a widely accepted technique for experiments evaluating human response to robotic behaviour, especially when the robot is required to demonstrate a certain level of intelligence. Consistent behaviour is crucial to maintaining uniformity of experience across all participants. An example of a WOZ system is the work done by Lu and Smart [13], in which a GUI was built to control a robot and receive feedback on its current activity.

A similar system created by Villano et al. [24] had the capability to assist therapists while conducting sessions with children with ASD (Autism Spectrum Disorder). Likewise, Kim et al. [11] used WOZ to control a robot that acted as a partner for a therapist. Additionally, WOZ systems are useful when examining a new unit or agent, a direction taken by studies such as Maulsby et al. [15], Green et al. [7] and Shiomi et al. [20], where data could be collected to provide feedback from real-world experiments. However, it is important to note that the operator’s decisions were generally based on an algorithm predetermined before deployment, which could potentially impact the collected data.

Other facets of this research experiment involve anthropomorphic movement and the physical motion of drawing. Previous works such as Tresset and Leymarie [22], along with Munoz et al. [16], describe robots able to create artwork from real images. There are issues with noise and slight accuracy deficiencies in the translation of image coordinates to endpoints of robotic units. In our case, we emphasised easy recognition of basic objects and images that participants could readily contribute to regardless of drawing ability.

Anthropomorphic movements have been a significant focus of robotic studies for several applications, including psychology and the social sciences [2, 19]. The majority of this research focuses on likeability and human reaction to a robot based on gestures and other behaviours. Behind many of these projects, such as Bartneck et al. [2] or Salem et al. [19], anthropomorphism was analysed through a WOZ implementation. While there have been multiple instances of robotic drawing experiments and of social collaborative robots controlled via WOZ, there has been little overlap between these two topics. As this architecture was designed for an experiment involving anthropomorphic behaviour, it was decided that a WOZ approach best gave the non-technical team members sufficient control over the UR10.

The experiment described below was designed for an “in the wild” HRC investigation developed in collaboration with humanities researchers. The location for this study was the Questacon - National Science and Technology Centre in Canberra, Australia. The public was invited to interact with the UR10 in a collaborative drawing task, in which an individual would sit at the table in a position that enabled both participants (human and robot) to physically interact.

Fig. 1: Experiment setup; participant left, WOZ operator right

The UR10, under direction of the WOZ operator, would lead the interaction by prompting a participant to complete various actions through non-verbal cues. The UR10 would first greet the participant upon meeting and wait for them to take a seat. The robot would then begin to render a drawing, pausing momentarily to observe its progress. The robot would then prompt the user to pick up a pen to contribute to the unfinished artwork. After several rounds of turn-taking, the robot drew a line in the lower right corner of the canvas to autograph the drawing and signal completion of the task. Once signed by both parties, the UR10 would indicate the participant was free to take the drawing home.

2 Architecture Overview

The applied experiment [5] required the UR10 to give the appearance of an independent social entity while having the capability to both draw and react to external stimuli. This section describes the software and hardware platforms utilised in this implementation, including the Robot Operating System (ROS) module descriptions, the UR10 control method and the motion planning details.

2.1 Robot Operating System

Robot Operating System (ROS) is a popular open-source middleware for the development of robotic applications [18]. Benefits of ROS include integrated communication between independent software modules (referred to in ROS as ‘nodes’), a diverse series of dynamic libraries and open-source tools to assist development. The following implementation runs on a Linux platform and is compatible with both the Kinetic Kame and Indigo Igloo distributions of ROS. ROS is the primary source of communication between different software and hardware modules. The nodes communicate via pairs of publishers and subscribers sharing a specific topic, represented as one-way arrows in Figure 2. Three ROS nodes run simultaneously to make up the framework’s core: the interrupt, social command and robot interface nodes. The social command node contains two publishers with subscribers in the robot interface node. The WOZ command publisher sent the majority of commands, including all social actions, drawing routines and parameter calibrations, while the override command was used to disable any running actions if needed. The robot interface node returned values to the social command node, indicating what state the robot was operating in, via the robot feedback publisher. Any messages received from the robot feedback publisher were displayed back to the user.
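As an illustrative sketch, the social command node’s publisher/subscriber pairing might be set up as below; the topic names (woz_command, override_command, robot_feedback) are placeholders rather than the framework’s actual identifiers.

```python
#!/usr/bin/env python
# Sketch of the social command node (hypothetical topic names).
import rospy
from std_msgs.msg import String

def feedback_callback(msg):
    # Echo robot state back to the WOZ operator's terminal.
    rospy.loginfo("Robot state: %s", msg.data)

def main():
    rospy.init_node('social_command')
    woz_pub = rospy.Publisher('woz_command', String, queue_size=10)
    override_pub = rospy.Publisher('override_command', String, queue_size=10)
    rospy.Subscriber('robot_feedback', String, feedback_callback)

    # Read operator commands from the terminal and forward them.
    while not rospy.is_shutdown():
        command = raw_input('WOZ> ').strip()  # Python 2, as used by ROS Kinetic/Indigo
        if command == 'override':
            override_pub.publish('stop')      # disable any running action
        elif command:
            woz_pub.publish(command)          # social action / drawing / calibration

if __name__ == '__main__':
    main()
```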

Fig. 2: Topic overview for ROS communication

The interrupt node was created as a safety feature that allowed the WOZ operator to send a signal causing the robot to withdraw to a safe distance in the event of a potential collision with a participant. Currently, this is manually triggered by the WOZ operator; however, the same signal can be sent by a computer vision node that automatically triggers the robot to withdraw if the participant comes within a given distance. Following a withdrawal motion, the UR10 would attempt to resume the last task, be it drawing or gesturing to the participant.
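A minimal sketch of such an interrupt node is given below, assuming a hypothetical topic name, robot IP address and safe pose.

```python
#!/usr/bin/env python
# Sketch of the interrupt node (hypothetical topic name, IP and safe pose).
import rospy
from std_msgs.msg import String
import urx

SAFE_POSE = (0.3, -0.4, 0.5, 0.0, 3.14, 0.0)  # placeholder pose (metres, axis-angle)

rob = urx.Robot("192.168.1.10")  # assumed controller IP

def interrupt_callback(msg):
    # Abort the current motion, then retreat linearly to the safe pose.
    rob.stopl(acc=0.5)
    rob.movel(SAFE_POSE, acc=0.2, vel=0.5)

rospy.init_node('interrupt')
rospy.Subscriber('interrupt', String, interrupt_callback)
rospy.spin()
```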

2.2 The UR10

The UR10 is an industrial robot arm with 6 degrees of freedom and a 1.3 m maximum reach. To achieve communication between the proposed framework and the UR10, the python-urx [14] library was used to send commands over a TCP connection as remote procedure calls (RPC). This meant that each message needed to contain executable code written in URScript, a language specifically designed to interface with the UR3, UR5 and UR10 robots.
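A minimal sketch of this connection path, assuming a placeholder controller IP address:

```python
import urx

rob = urx.Robot("192.168.1.10")  # placeholder IP of the UR10 controller
try:
    # python-urx wraps common motions, sending URScript over TCP.
    pose = rob.getl()                      # current TCP pose [x, y, z, rx, ry, rz]
    pose[2] += 0.05                        # raise the end-effector by 5 cm
    rob.movel(pose, acc=0.1, vel=0.3)      # linear move (m/s^2, m/s)

    # Raw URScript can also be sent directly.
    rob.send_program("textmsg('hello from the WOZ framework')")
finally:
    rob.close()
```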

The motions of the UR10 can be executed as a single linear trajectory or as movement through a series of points, creating a smooth, fluid motion. Both are based around the end-effector’s position relative to the world coordinate frame, with the rotational values of the arm calculated via the UR10’s inverse kinematics system. There are several constraints on the movement of the UR10. Within a single motion the robot arm requires a certain minimum distance to accelerate and then decelerate. The velocity increases at a constant rate until either a defined maximum is reached or the UR10 needs to begin decelerating so the arm can arrive at its intended target endpoint. This constraint could cause a program malfunction when the robot’s speed was too high to accurately move between two points, or when the start and target endpoints were too close together. This limitation presented an issue when planning the drawing actions. As sketching is a dexterous task, being able to accurately draw detailed lines was a priority. To overcome this constraint, contours being drawn were separated into a series of points an acceptable distance apart. This solution is expanded upon in the drawing subsection below.

Animating
The animations were made to exhibit anthropomorphic behaviour so as to establish non-verbal communication with a participant. The animations were designed using a simple interface that allowed the recording and playback of each individual animation. To streamline the animation process, the robot was set to free-drive mode, in which the researcher could freely and safely manipulate the UR10 into a desired position. Once in position, the researcher would record the end-effector’s coordinates. The various coordinates/positions were saved to a CSV (comma-separated values) file along with animation titles and required delays between movements. The primary use of the animations was to prompt the participant to complete a desired action, such as picking up a pen. This and other animations are initiated by the WOZ operator from the command-line terminal. Animation speeds differ significantly from the drawing motions. Depending on how fast the robot needed to move, a maximum velocity of 50 cm/s was acceptable. The acceleration was usually within the range of 10 cm/s² to 20 cm/s². These speed parameters were determined to be optimal based on early testing phases and initial feedback from test groups. While performing an animated movement, the robot did not account for accidental collision between its two aluminium tubes and the steel base. The priority was to ensure that all combinations of animated motions did not result in a collision.
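A sketch of this record-and-playback scheme, assuming a hypothetical CSV layout of animation title, delay and six pose values per row:

```python
import csv
import time
import urx

rob = urx.Robot("192.168.1.10")  # placeholder IP

def record_keyframe(writer, title, delay):
    # Robot is in free-drive; capture the current end-effector pose.
    pose = rob.getl()  # [x, y, z, rx, ry, rz]
    writer.writerow([title, delay] + list(pose))

def play_animation(path, title, acc=0.15, vel=0.5):
    # Replay all keyframes recorded under the given animation title.
    with open(path) as f:
        for row in csv.reader(f):
            if row[0] != title:
                continue
            delay, pose = float(row[1]), [float(v) for v in row[2:8]]
            rob.movel(pose, acc=acc, vel=vel)  # 0.5 m/s, 0.1-0.2 m/s^2 per the text
            time.sleep(delay)

play_animation('animations.csv', 'greet')  # hypothetical file and animation title
```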

Drawing
The motivation for using a collaborative drawing task was to initiate a collaboration between robot and participant in an open-ended task that was mutual, involved turn-taking, and was enjoyable for the participants. It was also important that the interaction fit within a limited time frame. Simple line drawings proved the most effective for this purpose, as the images were readily recognisable to the participants and did not require advanced artistic skills. The focus was on the interaction, rather than aesthetic criteria. When called upon, the contour extraction method would first convert the image to grayscale and apply a binary threshold. This thresholded image would then be the primary parameter to the ‘findContours’ function, which outputs a series of points for each contour in the original image. To reduce the amount of time needed to render the image, the number of points on each contour was reduced by implementing the Ramer-Douglas-Peucker [6] algorithm. Both the ‘findContours’ method and the Ramer-Douglas-Peucker algorithm were implemented through the OpenCV library [17]. Once extracted, each point on a contour was iterated through to check that the distance between points met the hardware requirements discussed above. Where a point was found to be closer than this threshold, it was ignored and the next point on the contour was verified. Once all of the contours were extracted, this method would pass them along to be translated, scaled and physically rendered.
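A sketch of this extraction pipeline with OpenCV; the MIN_DIST threshold stands in for the hardware minimum distance discussed above:

```python
import cv2
import numpy as np

MIN_DIST = 5.0  # placeholder: minimum pixel spacing satisfying the motion constraint

def extract_contours(image_path, epsilon=2.0):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # findContours' return signature varies across OpenCV versions;
    # the contour list is always the second-to-last element.
    contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

    simplified = []
    for contour in contours:
        # Ramer-Douglas-Peucker simplification.
        approx = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2)

        # Drop points closer than the hardware minimum to the last kept point.
        kept = [approx[0]]
        for point in approx[1:]:
            if np.linalg.norm(point - kept[-1]) >= MIN_DIST:
                kept.append(point)
        simplified.append(np.array(kept))
    return simplified
```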

Each point provided by the contour extraction method was mapped in turn onto a three-dimensional plane defined by four coordinates set in the calibration stage. These coordinates were described within the implementation as the bottom-left, bottom-right, top-left and top-right corners of the canvas as they relate to the observational position of the robot. Participants were seated opposite, facing the robot; thus the image coordinates were mapped to be drawn from their perspective, see Figure 1. We presumed that participants would be better able to collaborate on a drawing when the image was easily recognised and drawn from their perspective. Once translated, the image coordinates needed scaling in relation to the chosen canvas size. First, the image was evaluated based on its ratio of height to width, leading to the creation of a drawing space: an area within the canvas sharing the same ratio. Unlike the animating speed parameters, the speed of motion while drawing had to be limited. While acceleration settings remained consistent with the animating motion, the velocity was limited to a maximum of 30 cm/s to allow for more accurate movements.
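One way to realise this mapping, sketched below under the assumption that the four calibrated corners are stored as 3-D points and that image coordinates are first normalised to [0, 1], is bilinear interpolation between the corners:

```python
import numpy as np

def map_to_canvas(u, v, bl, br, tl, tr):
    """Bilinearly interpolate normalised image coords (u, v) in [0, 1]
    onto the 3-D canvas plane spanned by the four calibrated corners."""
    bottom = (1.0 - u) * bl + u * br   # interpolate along the bottom edge
    top = (1.0 - u) * tl + u * tr      # interpolate along the top edge
    return (1.0 - v) * bottom + v * top

# Example with placeholder corner positions (metres, robot base frame):
bl, br = np.array([0.4, -0.2, 0.02]), np.array([0.4, 0.2, 0.02])
tl, tr = np.array([0.7, -0.2, 0.02]), np.array([0.7, 0.2, 0.02])
print(map_to_canvas(0.5, 0.5, bl, br, tl, tr))  # centre of the canvas
```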

3 WOZ Commands

The command system of this implementation was a crucial component for establishing the interface between the WOZ operator and the robot. By placing the WOZ operator away from direct view, participants more readily interacted with the robot as a social actor and the illusion of social intelligence was maintained. To support the interaction, the operator would use a series of commands to puppeteer the robot’s behaviour depending on the situation. If asked, the WOZ operator would inform the participant after the interaction that the robot was manually controlled.

The drawing routine comprised a series of actions performed sequentially. The robot would start by greeting the participant, begin drawing, pause to evaluate its efforts (using a predetermined animation), then finish its drawing and return to an idle state. The sequence began when a participant initiated an interaction with the robot, with the WOZ operator controlling the timing, execution and order of events. Afterwards, the series of aforementioned commands could be executed to complete the desired interaction.

The commands were separated into three categories: maintenance, states and actions. Apart from the maintenance category, all commands generally applied to social actions. Each command was sent via a command-line interface developed in ROS with Python, with instructions delivered through the UR10 application programming interface (API) provided by the python-urx [14] library.
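A sketch of how such a terminal dispatcher might be organised; the command names and handler bodies are illustrative placeholders rather than the framework’s exact command set:

```python
# Illustrative command dispatcher for the WOZ terminal interface.
def do_nod():        print('entering nodding idle state')
def do_draw():       print('starting drawing routine')
def do_calibrate():  print('entering free-drive calibration')
def do_withdraw():   print('withdrawing to safe pose')

COMMANDS = {
    # maintenance               states          actions
    'calibrate': do_calibrate,  'nod': do_nod,  'draw': do_draw,
    'withdraw': do_withdraw,
}

while True:
    cmd = raw_input('WOZ> ').strip()   # Python 2, matching ROS Kinetic/Indigo
    handler = COMMANDS.get(cmd)
    if handler:
        handler()
    else:
        print('unknown command: %s' % cmd)
```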


Fig. 3: WOZ Command Structure

The maintenance directives set parameters for the experiment. These instructions handled settings such as velocity, acceleration, animations and calibrations. The maintenance commands were also able to run individual actions of the experiment for testing purposes. These test instructions were built on overloaded functions embedded within the social routine code structure. The UR10 has two motion-based parameters adjustable within the python-urx library: velocity and acceleration.

The calibration routine could be called on start-up or at the user’s request. This command would place the robot in free-drive mode, allowing it to be moved by external force. While calibrating, the WOZ operator would move the robot to each corner of the canvas, recording positions in three-dimensional space relative to the robot base. From there, the lowest corner, along with its neighbours, makes up a three-dimensional plane representing the canvas.
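A sketch of this calibration step with python-urx, with prompts simplified and the corner order assumed:

```python
import urx

rob = urx.Robot("192.168.1.10")  # placeholder IP

def calibrate_canvas():
    corners = {}
    rob.set_freedrive(True)  # let the operator move the arm by hand
    for name in ('bottom left', 'bottom right', 'top left', 'top right'):
        raw_input('Move the pen to the %s corner, then press Enter' % name)
        corners[name] = rob.getl()[:3]  # keep only x, y, z
    rob.set_freedrive(False)
    return corners
```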

The state commands instruct the robot how to behave in a continuous waiting manner. The WOZ operator could choose between three predetermined states: a nodding motion, an observing animation or a withdrawn pose. These three idle states allowed the participant to contribute to the interaction between themselves and the robot. They also assisted the WOZ operator with control of the exhibit, as specific actions could only be executed from an idle state. The nodding command would give the robot the appearance of looking at a participant and nodding at random time intervals. The observe command would give the illusion of the robot evaluating both the canvas and the participant via a series of linear endpoint transitions. Finally, the withdrawn state caused the robot to pull away from the human to a safe position and remain motionless.
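As an illustration only, an idle nodding loop might alternate between two poses with randomised pauses; the poses and timing ranges below are placeholders:

```python
import random
import time
import urx

rob = urx.Robot("192.168.1.10")                 # placeholder IP
POSE_UP = [0.3, -0.4, 0.55, 0.0, 3.14, 0.0]     # placeholder 'head up' pose
POSE_DOWN = [0.3, -0.4, 0.50, 0.0, 3.14, 0.0]   # placeholder 'head down' pose

def nod_idle(stop_flag):
    # Nod at random intervals until an action command clears the idle state.
    while not stop_flag():
        rob.movel(POSE_DOWN, acc=0.15, vel=0.3)
        rob.movel(POSE_UP, acc=0.15, vel=0.3)
        time.sleep(random.uniform(1.0, 4.0))    # random pause between nods
```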

The action commands were the series of motions used to non-verbally communicate and interact with the participant. Action commands were called while the robot was in a continuous state of idle motion. These commands are listed in the right column of Figure 3. The ‘prompt pen pick-up’ command could be issued in multiple scenarios where a participant was required to pick up or replace a pen (held in the cup on the table). The purpose of the ‘encourage to draw’ command was to signal the participant to collaborate on the drawing and initiate turn-taking. Through the ‘sign’ instruction the UR10 would autograph the artwork. If necessary, the WOZ operator could re-send the ‘prompt pen pick-up’ command to encourage the participant to take back the pen. To focus the participant’s attention on the empty space adjacent to the robot’s signature, the WOZ operator called the ‘encourage to sign’ command. Together, these commands were intended to prompt participants to write their signature next to the robot’s autograph.

4 Conclusion

Human robot interaction and collaborative research can involve very labour-intensive integration between the hardware (robot) and study variables. This can limit groups whose members lack particular technical skills. The proposed architecture was designed such that researchers without any programming experience could still facilitate a collaborative experiment with minimal technical assistance. The framework currently centres around an interaction in which a participant and a UR10 contribute to a shared artwork; however, it can be adapted to test a multitude of social and physical variables in the future. Here, we have summarised the control architecture for the WOZ setup. The results of the experiments with participants, including analysis and discussion, are summarised in [5]. A complete analysis of the experimental data is forthcoming.

5 Future Direction

The current framework is hard-coded to a fixed set of commands and one collaborative task. Future development will investigate how to make this architecture more flexible and allow different collaborative tasks to be integrated. One contribution of the control architecture is that it enables researchers with little technical knowledge to add, remove and execute animations with greater control. At present, the WOZ commands are terminal-based, which makes certain tasks more convoluted. For this reason, a graphical user interface (GUI) will be added to give the WOZ application a more visually appealing appearance and streamlined functionality. Other features could include forms of data logging and recording to make experiment evaluations easier to monitor.


6 Acknowledgements

The study was conducted with ethical approval from the Human Research Ethics Committee of the University of Canberra (HREC 20180158). It was undertaken in collaboration with the University of Aalborg; we would like to thank Jonas Elbler Pedersen and Kristoffer Wulff Christensen. This work would not have been possible without the Innovation Vouchers Program jointly funded by the ACT Government (Australia), the University of Canberra and Robological PTY LTD.

References

1. R. S. Andersen, S. Bøgh, T. B. Moeslund, and O. Madsen. Task space HRI for cooperative mobile robots in fit-out operations inside ship superstructures. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 880–887.

2. Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1):71–81, 2009.

3. Andrea Bauer, Dirk Wollherr, and Martin Buss. Human–robot collaboration: a survey. International Journal of Humanoid Robotics, 5(01):47–66, 2008.

4. Paul Baxter, Emily Ashurst, Robin Read, James Kennedy, and Tony Belpaeme. Robot education peers in a situated primary school study: Personalisation promotes child learning. PLoS ONE, 12(5):e0178126, 2017.

5. Kristoffer W. Christensen, Jonas E. Pedersen, Elizabeth A. Jochum, and Damith Herath. The truth is out there: Capturing the complexities of human robot-interactions. In Workshop on Critical Robotics - Exploring a New Paradigm, NordiCHI'18 Nordic Conference on Human-Computer Interaction (2018), Oslo, Norway.

6. David H. Douglas and Thomas K. Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization, 10(2):112–122, 1973.

7. Anders Green, Helge Hüttenrauch, and K. Severinson Eklundh. Applying the wizard-of-oz framework to cooperative service discovery and configuration. In Robot and Human Interactive Communication, 2004. ROMAN 2004. 13th IEEE International Workshop on, pages 575–580. IEEE.

8. D. Herath, E. Jochum, and E. Vlachos. An experimental study of embodied interaction and human perception of social presence for interactive robots in public settings. IEEE Transactions on Cognitive and Developmental Systems, 2018.

9. D. C. Herath, C. Kroos, C. J. Stevens, L. Cavedon, and P. Premaratne. Thinking head: Towards human centred robotics. In 2010 11th International Conference on Control Automation Robotics & Vision, pages 2042–2047.

10. Damith Herath, Christian Kroos, and Stelarc. Robots and Art: Exploring an Unlikely Symbiosis. Cognitive Science and Technology. Springer Singapore, Singapore, 2016.

11. Elizabeth S. Kim, Lauren D. Berkovits, Emily P. Bernier, Dan Leyzberg, Frederick Shic, Rhea Paul, and Brian Scassellati. Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, 43(5):1038–1049, 2013.


12. Saso Koceski and Natasa Koceska. Evaluation of an assistive telepresence robot for elderly healthcare. Journal of Medical Systems, 40(5):121, 2016.

13. David V. Lu and William D. Smart. Polonius: A wizard of oz interface for HRI experiments. In Proceedings of the 6th International Conference on Human-Robot Interaction, pages 197–198. ACM.

14. Sintef Raufoss Manufacturing. GitHub - SintefRaufossManufacturing/python-urx: Python library to control a robot from 'Universal Robots'. https://github.com/SintefRaufossManufacturing/python-urx, July 2018.

15. David Maulsby, Saul Greenberg, and Richard Mander. Prototyping an intelligent agent through wizard of oz. In Proceedings of the INTERACT'93 and CHI'93 Conference on Human Factors in Computing Systems, pages 277–284. ACM.

16. Jose-Maria Munoz, Jose Avalos, and Oscar E. Ramos. Image-driven drawing system by a NAO robot. In Electronic Congress (E-CON UNI), 2017, pages 1–4. IEEE.

17. OpenCV. Structural analysis and shape descriptors - OpenCV 2.4.13.7 documentation. https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html, July 2018.

18. Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3, page 5. Kobe, Japan, 2009.

19. Maha Salem, Friederike Eyssel, Katharina Rohlfing, Stefan Kopp, and Frank Joublin. To err is human(-like): Effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics, 5(3):313–323, 2013.

20. Masahiro Shiomi, Takayuki Kanda, Satoshi Koizumi, Hiroshi Ishiguro, and Norihiro Hagita. Group attention control for communication robots with wizard of oz approach. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pages 121–128. ACM.

21. Robert J. Torres, Michael P. Heck, James R. Rudd, and John F. Kelley. Usability engineering: A consultant's view of best practices and proven results. Ergonomics in Design, 16(2):18–23, 2008.

22. Patrick Tresset and Frederic Fol Leymarie. Portrait drawing by Paul the robot. Computers & Graphics, 37(5):348–363, 2013.

23. Lauren Vasey, Long Nguyen, Tovi Grossman, Heather Kerrick, Tobias Schwinn, David Benjamin, Maurice Conti, and Achim Menges. Human and robot collaboration enabling the fabrication and assembly of a filament-wound structure. In ACADIA//2016: Posthuman Frontiers: Data, Designers, and Cognitive Machines, Proceedings of the 36th Annual Conference of the Association for Computer Aided Design in Architecture, pages 184–195.

24. Michael Villano, Charles R. Crowell, Kristin Wier, Karen Tang, Brynn Thomas, Nicole Shea, Lauren M. Schmitt, and Joshua J. Diehl. DOMER: a wizard of oz interface for using interactive robots to scaffold social skills for children with autism spectrum disorders. In Proceedings of the 6th International Conference on Human-Robot Interaction, pages 279–280. ACM.

25. Q.S.M. Zia-Ul-Haque, Zhiliang Wang, Cunyang Li, Juan Wang, and Yujun. A robot that learns and teaches English language to native Chinese children. Pages 1087–1092. IEEE, 2007.