See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/228783215 Personal space-based modeling of relationships between people for new human- computer interaction Article CITATION 1 READS 16 4 authors: Toshitaka Amaoka Meisei University 8 PUBLICATIONS 91 CITATIONS SEE PROFILE Hamid Laga University of South Australia 63 PUBLICATIONS 355 CITATIONS SEE PROFILE Suguru Saito Tokyo Institute of Technology 73 PUBLICATIONS 416 CITATIONS SEE PROFILE Masayuki Nakajima Uppsala University 193 PUBLICATIONS 990 CITATIONS SEE PROFILE All in-text references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. Available from: Hamid Laga Retrieved on: 29 September 2016





PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

Toshitaka Amaoka† Hamid Laga‡ Suguru Saito† Masayuki Nakajima†

†Graduate School of Information Science and Engineering, ‡Global Edge Institute

Tokyo Institute of Technology
E-mail: [email protected], {hamid,suguru,nakajima}@img.cs.titech.ac.jp

ABSTRACT

In this paper we focus on the Personal Space (PS) as a non-verbal communication concept for building a new Human Computer Interaction. The analysis of people's positions with respect to their PS gives an idea of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV), and to visualize it using Computer Graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using CV technology, while the other two parameters are set manually. Finally, we calculate the two-dimensional relationship of multiple persons and visualize it as 3D contours in real-time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationship of users through an intuitive visual representation. The results of this paper can be applied to Human Computer Interaction in public spaces.

Keywords: Personal Space, People Detection, People Tracking, Face Detection.

1. INTRODUCTION

Recent advances in Human Computer Interfaces (HCI) brought a new range of commercial products that aim at connecting the physical and the virtual worlds by allowing the user to communicate with the computer in a natural way.

The goal of this paper is to provide a human-computer interaction tool in which the computer detects and tracks the user's state and his relation with other users, then initiates actions based on this knowledge rather than simply responding to user commands. In this work we focus on the concept of Personal Space (PS) [5], a non-verbal and non-contact communication channel. Everyone holds, preserves, and updates this space, and reacts when it is violated by another person. In public spaces, for example, people implicitly interact with each other using the space around them. Based on this concept, we provide a new human-computer interaction framework.

In a first step, we propose a mathematical model of the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using Computer Vision (CV) technology, while the other two parameters are set manually. Based on this model we calculate the two-dimensional relationship of multiple persons and visualize it as 3D contours in real-time.

The remaining parts of the paper are organized as follows: Section 1.1 reviews related work. Section 1.2 outlines the main contributions of the paper. Section 2 details the mathematical model we propose for the PS. Section 3 describes the system and algorithms we propose for estimating the parameters of the PS. Results are presented and discussed in Section 4. We conclude in Section 5.

1.1 Related work

Existing Human Computer Interaction (HCI) technologies are mostly based on explicit user inputs acquired through devices such as the keyboard, mouse, and joystick. There have also been many attempts to emulate these devices using cameras and speech so that the user can use his hands for other purposes. Examples of such input devices include the Camera mouse [1] and voice commands in home environments [8].

There is, however, no HCI technology that interprets the meaning of distances between people, nor one that uses communication through space and distance. This concept is reflected in the notion of Personal Space (PS), a form of non-verbal communication and behavior.

The concept of Personal Space, since its introduction by Edward Hall [5, 4] and the discussion by Robert Sommer [7], has been studied and applied in many fields. Psychologists studied the existence of the PS in Second Life [11, 2]; they found that many users keep a PS around their avatar and behave in accordance with it in the virtual world. The PS is also one factor that makes a virtual agent behave naturally and human-like inside the virtual world [3]. In robotics, the PS is considered as a factor for selecting a communication method between a robot and a human [9]. It can be used, for example, to estimate how intimate a robot is with other users.

The PS concept is based on many complex rules. Its shape and size are affected by several factors such as gender, age, and social position.


Fig. 1: Definition of the Personal Space.

1.2 Overview and contributions

We propose an HCI framework based on the concept of the PS to create interactive art. The idea is to interpret distances between people and enable interaction with them. The proposed method estimates the shape of the personal space by first measuring the user's position and face orientation. We can then simulate the way people communicate with each other through the interaction of their personal spaces. The framework generates and varies the contents according to the relationship between users.

The contributions of this paper are three-fold: first, we propose a mathematical model of the PS. It is controlled by four parameters: the user's age, gender, position in the 2D space, and face orientation. Second, we propose methods to estimate these parameters. The user's position and face orientation are estimated automatically using a set of cameras mounted around the operation area; at the current stage, the gender and age are set manually by the user. Finally, we visualize the PS in 3D using computer graphics.

The method proposed in this paper enables the detection of the user's mobile territory and his relationship with others. Our targeted application is interactive and collaborative digital art, but the results of this work can also be applied to modeling the behavior of virtual agents, as well as to analyzing the behavior of people in a crowd.

2. MODELING THE PERSONAL SPACE

Edward T. Hall, in his study of human behavior in public spaces [5], found that every person unconsciously holds a mobile territory surrounding him like a bubble. The violation of this personal space by a third person triggers a reaction that depends on the relationship between the two persons. This suggests that the PS acts as a non-verbal communication channel between two or more persons. The personal space as defined by Hall, shown in Fig. 1, is composed of four areas: the intimate space, the personal space, the social space, and the public space.

The shape of the PS is affected by several parameters. In this paper we consider four of them: gender, age, distance, and face orientation. The relationship between gender and the PS is well studied by sociologists [6]. Many previous studies [6] suggested that the shape of the PS varies with the face orientation. For example, the PS is twice as wide in the front area of a person as in the back and side areas.

2.1 The model of the PS

Given a person P located at coordinates p = (x, y), we define a local coordinate system centered at p, with the X axis along the face and the Y axis along the sight direction, as shown in Fig. 1. The personal space around the person P can then be defined as a function Φ_p which has its maximum at p and decreases as we move away from p. This can be represented by a two-dimensional Gaussian function Φ_p of covariance matrix Σ, centered at p:

Φ_p(q) = exp( -(1/2) (q - p)^T Σ^{-1} (q - p) ),   (1)

where Σ is a diagonal matrix:

Σ = ( σ_xx²    0
       0     σ_yy² ).   (2)

The parameters σ_xx and σ_yy define the shape of the PS. Considering the fact that the PS is twice as wide in front, along the sight line, as in the side (left and right) areas, we define σ_yy = 2σ_xx.

This model assumes that the shapes of the front and back areas of the PS are similar. However, previous studies pointed out that people are more strict regarding their frontal space. Shibuya [6] defines the PS in front of a person as twice as large as the back, left, and right areas. We use this definition in our implementation and build the model by blending two Gaussian functions as follows:

Φ_p(q) = δ(y_q) Φ¹_p(q) + (1 - δ(y_q)) Φ²_p(q),   (3)

where q = (x_q, y_q)^T, and δ(y) = 1 if y ≥ 0, and 0 otherwise.

Φ¹_p models the frontal area of the person and is defined as a 2D Gaussian function of covariance:

Σ_1 = ( σ_xx²     0
         0     4σ_xx² ).   (4)

Φ²_p models the back area of the person and is defined as a 2D Gaussian function of covariance:

Σ_2 = ( σ_xx²    0
         0     σ_xx² ).   (5)

Notice that the standard deviation of Φ¹_p along the Y axis is twice the standard deviation of Φ²_p along the same axis. The function δ blends the two functions and therefore allows the face orientation to be taken into account. This concept is illustrated in Fig. 2.

In our implementation we define σ_xx by the distance threshold at which a specific zone of the space is violated. For example, to model the intimate space of a standard person we set σ_xx = σ_0 = 0.45/2 = 0.225 m, as shown in Fig. 1. The figure also gives the standard values of σ_xx for the other zones of the PS.
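As a concrete illustration, the blended model of Eqs. (1)-(5) can be evaluated as follows. This is a minimal sketch, not the authors' implementation: the rotation convention and the names (`personal_space`, `theta`) are assumptions, and σ_xx is taken as already scaled for the zone of interest.

```python
import numpy as np

def personal_space(q, p, theta, sigma_xx):
    """Evaluate the blended PS function of Eq. (3) at query point q.

    q, p     -- 2D world coordinates of the query point and the person
    theta    -- face orientation in radians (local Y axis = sight direction)
    sigma_xx -- standard deviation along the local X axis (side areas)
    """
    # Rotate the query point into the person's local frame:
    # X along the face, Y along the sight direction.
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])
    xq, yq = rot @ (np.asarray(q, float) - np.asarray(p, float))

    if yq >= 0:
        # Frontal Gaussian, covariance Sigma_1 of Eq. (4): sigma_yy = 2 sigma_xx.
        return np.exp(-0.5 * (xq**2 / sigma_xx**2 + yq**2 / (4 * sigma_xx**2)))
    # Back Gaussian, isotropic covariance Sigma_2 of Eq. (5).
    return np.exp(-0.5 * (xq**2 + yq**2) / sigma_xx**2)
```

At q = p the function equals 1, and it decays twice as slowly in front of the person as behind, matching the behavior illustrated in Fig. 2.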


Fig. 2: The Personal Space model based on the face orientation (σ_yy = 2σ_xx). The PS in the front area of a person is wider than in the back and side areas.

2.2 Parameterization of the PS

The personal space depends not only on the position and face orientation but also on factors related to the person, such as age, gender, social position, and character. These personal factors can be included in a function f that affects the value of the standard deviation σ_xx:

σ_xx = f(σ_0).   (6)

In the simplest case, f can be a linear function that scales σ_0 by a factor α reflecting how keen a person is to protect his intimate space. We adopt this model in our implementation.
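The linear choice of f above amounts to a one-line function; α = 1 recovers the standard person. The function name and default value below are illustrative only:

```python
def scaled_sigma(sigma0, alpha=1.0):
    """Linear personal-factor model of Eq. (6): sigma_xx = alpha * sigma0.

    alpha > 1 models a person keener to protect a larger space;
    alpha = 1 gives the standard person.
    """
    return alpha * sigma0

# A person twice as protective of the intimate zone (sigma0 = 0.225 m):
print(scaled_sigma(0.225, alpha=2.0))  # 0.45
```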

In summary our Personal Space model is parameterizedby two types of factors:

• The inter-personal factors, namely the distances and face orientations, which are embedded in the parameter σ_xx. These two parameters are detected automatically, as will be explained in Section 3.

• The personal parameters, such as age, gender, and social position, which are embedded in the function f. These parameters are input manually by the user.

In the following we describe the system we developed for estimating the inter-personal factors used for simulating the Personal Space.

3. SYSTEM

Using the model of Eq. 3 for interaction requires the estimation of the user's position and face orientation. We built a computer vision platform to track the user's behavior in front of the screen. The system, as shown in Figure 3, is composed of four IEEE1394 cameras: one overhead camera for people detection and tracking, and three frontal cameras for face detection and orientation estimation.

Fig. 3: Overview of the system setup (one PC and four cameras).

3.1 Multiple user tracking

We detect people by detecting and tracking moving blobs in the scene:

• First, using the overhead camera, we take shots of the empty scene and use them to build a background model by learning a Mixture of Gaussians classifier. Given a new image I, the classifier classifies every pixel as background or foreground.

• The persons are detected as connected components of the foreground pixels.

• Second, during operation, we track people over frames using the Mean Shift algorithm.

For efficient detection and tracking, we set up the system in a studio under controlled lighting conditions.
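The detection pipeline above can be sketched as follows. The paper learns a Mixture-of-Gaussians background classifier; to keep this sketch self-contained, a single Gaussian per pixel is substituted, and the Mean Shift tracker is omitted. All function names are illustrative:

```python
import numpy as np

def learn_background(empty_frames):
    """Per-pixel mean/std from shots of the empty scene (a single-Gaussian
    stand-in for the paper's Mixture-of-Gaussians classifier)."""
    stack = np.stack(empty_frames).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def foreground_mask(frame, mean, std, k=3.0):
    """Classify each pixel as foreground if it deviates > k std devs."""
    return np.abs(frame.astype(float) - mean) > k * std

def connected_blobs(mask):
    """Label 4-connected components of the foreground mask (person candidates)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                stack = [(i, j)]  # flood fill from the seed pixel
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count
```

Each labeled blob's centroid then serves as a person's position p on the floor plane.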

3.2 Estimation of the face orientation

To estimate the face orientation in 3D space, we:

• Detect faces from each of the three frontal cameras using the Viola and Jones face detector [10].

• Assign the detected facial images to each of the persons detected with the overhead camera.

Since the entire camera system is calibrated, i.e., the transformation matrices between the frontal cameras, and between each frontal camera and the top one, are computed in advance, the faces are assigned to the persons by matching each face to its closest blob in the overhead camera.
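The assignment step amounts to a nearest-neighbour match in the overhead camera's ground-plane coordinates; the calibration that projects each frontal-camera face detection into this plane is assumed to have been applied already. A minimal sketch:

```python
import math

def assign_faces_to_blobs(face_points, blob_centroids):
    """Assign each face (already projected onto the overhead camera's
    ground plane) to the index of its nearest person blob.

    face_points    -- list of (x, y) face positions
    blob_centroids -- list of (x, y) blob centroids from the overhead camera
    Returns {face_index: blob_index}.
    """
    return {
        fi: min(range(len(blob_centroids)),
                key=lambda b: math.dist(face, blob_centroids[b]))
        for fi, face in enumerate(face_points)
    }
```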

Figure 4 shows an example where people's positions and faces are detected using our system.

4. RESULTS

To visualize the Personal Space of a person, we consider the three levels shown in Fig. 1 and define a PS function


Fig. 4: Example of person and face detection and tracking using the four cameras.

Fig. 5: Visualization of the personal area of the personal space. The user is facing Camera 3 (X axis).

for each zone using Equation 3. The user can manually set the factor α for each zone; the other parameters, the position and face orientation, are estimated automatically.

The PS function as defined in Equation 3 can also be interpreted as the degree of the user's response to the violation of a zone of his PS. Depending on his relation with the second person, the user activates one of the three functions.

To visualize this, we map the image space (captured by the top camera) onto the XY (floor) plane of the world coordinate system. Since we used only a frontal face detector, we detect three orientations of the face: 0, 90, and 180 degrees. Figure 5 shows an example where the user is facing Camera 3. Notice that Camera 3 has detected the face, and therefore the area of the PS in front of the user is larger than the back and side areas.
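Because only a frontal detector is used, the orientation can be quantized to whichever frontal camera currently sees the face. The camera-identifier-to-angle mapping below is a hypothetical illustration of this idea; the paper does not state which camera index corresponds to which angle:

```python
# Hypothetical mapping from the frontal camera that detected the face to a
# quantized orientation. The exact camera-to-angle correspondence is an
# assumption based on the three-frontal-camera layout described above.
CAMERA_TO_ORIENTATION_DEG = {
    "camera2": 0,
    "camera3": 90,
    "camera4": 180,
}

def face_orientation(detections):
    """Pick the orientation from whichever frontal camera reports a face.

    detections -- dict mapping camera id -> bool (face found in that view)
    Returns the quantized angle in degrees, or None if no face was found.
    """
    for cam, found in detections.items():
        if found:
            return CAMERA_TO_ORIENTATION_DEG[cam]
    return None
```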

Figure 6 shows an example where the user is standing still and rotating only his face from one camera to another. Notice how the personal space evolves with the face orientation. This demonstrates the effectiveness of our model.

5. CONCLUSION

In this paper we have proposed a mathematical model of the personal space. In particular, we implemented a method for automatically estimating two parameters of the PS: the people's positions and their face orientations. As an extension, we plan to automate the estimation of the other parameters, such as age and gender. Possible extensions also include improving the face detection algorithms to detect faces at different orientations. We also plan to use this personal space model for HCI and for modeling the behavior of virtual agents.

Acknowledgement

Hamid Laga is supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology program Promotion of Environmental Improvement for Independence of Young Researchers under the Special Coordination Funds for Promoting Science and Technology.


Fig. 6: Visualization of the PS when the face rotates from Camera 2 to Camera 4.

6. REFERENCES

[1] Camera mouse. http://www.cameramouse.org/ (accessed on November 15, 2009).

[2] Friedman, D., Steed, A., and Slater, M. Spatial social behavior in Second Life. Intelligent Virtual Agents 2007 (2007), 285–295.

[3] Gillies, M., and Slater, M. Non-verbal communication for correlational characters. Proceedings of the 8th Annual International Workshop on Presence, London (September 2005).

[4] Hall, E. T. Proxemics. Current Anthropology 9, 2-3 (1968), 83–108.

[5] Hall, E. T. The Hidden Dimension: Man's Use of Space in Public and Private. Anchor Books, reissue, 1990.

[6] Shozo, S. Comfortable Distance Between People: Personal Space. Japan Broadcast Publishing Co., Ltd, 1990.

[7] Sommer, R. Personal Space: The Behavioral Basis of Design. Prentice Hall Trade, 1969.

[8] Soronen, H., Turunen, M., and Hakulinen, J. Voice commands in home environment - a consumer survey - evaluation.

[9] Tasaki, T., Matsumoto, S., et al. Dynamic communication of humanoid robot with multiple people based on interaction distance. Transactions of the Japanese Society of Artificial Intelligence 20 (2005), 209–219.

[10] Viola, P., and Jones, M. Robust real-time face detection. International Journal of Computer Vision 55, 2 (May 2004), 137–154.

[11] Yee, N., Bailenson, J. N., Urbanek, M., Chang, F., and Merget, D. The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. Journal of CyberPsychology and Behavior 10 (2007), 115–121.