



Computers in Human Behavior 24 (2008) 1404–1417

The roles of sensory modalities in collaborative virtual environments (CVEs)

Chang S. Nam a,*, Joseph Shu b, Donghun Chung c

a Department of Industrial Engineering, University of Arkansas, Fayetteville, AR 72701, USA
b Handshake VR Inc., Waterloo, ON N2L 5C6, Canada
c School of Communication, Kwangwoon University, Seoul, 139-701, Korea

Available online 18 September 2007

Abstract

This study was conducted to assess the effects of sensory modalities on user performance, perception, and behavior in collaborative virtual environments (CVEs). Participants played a CVE game, air hockey, together with a remote partner under different sensory modality conditions, depending on the type of sensory feedback provided: visual-only (V), visual–haptic (V + H), and visual–haptic–audio feedback (V + H + A). Three types of measurements were used as dependent variables: (1) task performance, measured as playing time; (2) user perception, including the sense of presence, the sense of togetherness, and perceived collaboration; and (3) behavior measurements, including the amount of force applied and the mallet deviation. Results of the study indicated that task performance, perception, and user behavior in CVEs can be affected by the supported sensory modalities. Therefore, the multiple types of sensory information required to perform the task at hand should be provided to effectively support collaboration between people in CVEs. The outcomes of this research should have a broad impact on multimodal user interaction, including research on the physiological, psychophysical, and psychological mechanisms underlying human perception of multisensory feedback in CVEs.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Collaborative virtual environments (CVEs); Haptic feedback; Presence; Copresence; Collaboration

doi:10.1016/j.chb.2007.07.014

* Corresponding author. Tel.: +1 479 575 2563; fax: +1 479 575 8431.
E-mail addresses: [email protected] (C.S. Nam), [email protected] (J. Shu), [email protected] (D. Chung).


1. Introduction

As an intrinsic aspect of human work behavior, collaboration between people has recently gained considerable research interest, particularly in multi-user virtual reality (VR) systems such as collaborative (or shared) virtual environments. Collaborative virtual environments (CVEs) refer to computer-enabled, shared virtual spaces in which multiple users can interact with each other in real time while receiving multiple types of sensory feedback, such as visual, audio, and haptic. CVEs provide new ways for geographically remote people to work together in networked virtual environments by supporting collaborative efforts such as solving a Rubik's cube puzzle (Schroeder et al., 2001), moving a ring on a wire together (Basdogan, Ho, Srinivasan, & Slater, 2000), and holding virtual meetings (Nilsson, Heldal, Axelsson, & Schroeder, 2002). In addition, CVEs allow users to feel as if they were in the same real space, with a full range of sociological interactions (Casanueva & Blake, 2000). Finally, CVE users, represented by avatars, can communicate with each other via chat, gesture, and object manipulation (Redfern & Naughton, 2002). Because CVEs provide rich, collaborative environments that facilitate action and interaction, they have been applied in areas such as training (Stansfield, Miner, Shawver, & Rogers, 1995), education (Prasolova-Forland & Divitini, 2002), entertainment (Munro, 1999), and engineering simulation (West & Hubbold, 2001).

Despite these contributions, however, CVEs still warrant further consideration before they can make a significant impact in most fields. First, few studies have examined how collaboration between people in CVEs is affected by the supported sensory modalities. In particular, research issues pertaining to the effects of haptic feedback on people's ability to collaborate and on their perception of the collaborative environment need to be studied (Buttolo, Oboe, & Hannaford, 1997; Durlach & Slater, 2000; Sallnas, Rassmus-Grohn, & Sjostrom, 2000). In addition, many aspects of collaboration in CVEs remain unclear, including task performance, perceived presence, and the sense of togetherness. Furthermore, understanding how users behave in CVEs when performing a task together has been largely overlooked by researchers.

The primary objective of this study was to address these issues by evaluating the roles of the sensory modalities CVEs support. More specifically, an empirical experiment was conducted to assess how sensory modalities such as visual, auditory, and haptic affect (1) task performance, (2) perceived presence, (3) the sense of togetherness between two people at different locations, and (4) user behavior in CVEs.

We first describe the sense of presence in virtual environments (VEs) and its contributing factors, followed by a discussion of the relationship between perceived presence and task performance. Then, we discuss task performance in relation to sensory modality and the sense of copresence in VEs. Following the description of the experiment, we present and analyze the results obtained. Finally, we present directions for future work and conclusions.

2. Background

2.1. The sense of presence and performance in virtual environments

The wide acceptance of virtual environments has often been linked to their richness of sensory information and the realness of the experience (Barfield & Furness, 1995; Ellis, 1992; Held & Durlach, 1992). In effect, one of the main goals in the VE field is to generate an experience in a computer-generated environment that feels like reality. Terms like presence


(Lombard & Ditton, 1997), telepresence (Steuer, 1992), and virtual presence (Schloerb, 1995) have been used interchangeably to describe the manner in which people experience technologically mediated environments. Presence in virtual environments is here defined as "the degree to which participants feel that they are somewhere other than where they physically are when they experience the effects of a computer-generated simulation" (Bystrom, Barfield, & Hendrix, 1999, p. 241).

The level of presence can be affected by various factors. For example, the user's perceived presence depends on the degree to which spatial, auditory, and haptic transformations of objects in a virtual environment mimic the same types of transformations of the objects in the real world (Barfield, Hendrix, & Bystrom, 1999). Witmer and Singer (1998) proposed four categories of factors that may hinder or facilitate presence: control (degree of control, immediacy of control, anticipation of control, mode of control, and physical environmental control), sensory (sensory modality, environmental richness, multimodal sensory presentation, consistency of multimodal information, degree of movement perception, and active search), distraction (isolation, selective attention, and interface awareness), and realism (graphical realism, consistency of information with the objective world, meaningfulness of experience, and separation anxiety/disorientation). The level of presence can also be influenced by the nature of the task itself (Bystrom et al., 1999) as well as by individual differences in preferences for information displayed in various modalities (Slater & Wilbur, 1995).

Previous research has shown that high levels of presence can have significant effects on user performance and behavior in VEs (Bystrom et al., 1999; Slater, Linakis, Usoh, & Kooper, 1996; Youngblut & Huie, 2003). For example, Youngblut and Huie (2003) found that the sense of presence experienced during VE-based training can affect a user's ability to learn mission procedures. Bystrom et al. (1999) also hypothesized that the sense of presence in an environment is a necessary condition for performance to occur. Slater et al. (1996) maintain that the sense of presence contributes to user behavior that more closely mimics real-world behavior, thereby improving performance.

However, it is also important to note that the relationship between the level of presence and task performance in VEs is not straightforward. In Hendrix's (1994) study evaluating the relationship between the level of presence (as a function of visual and auditory display parameters) and task performance, she found that adding stereopsis to the visual display increased reported levels of presence and subjective assessments of one's success in performing spatial judgments. However, actual performance did not improve when participants used stereoscopic vision. Slater and Wilbur (1995) also claim that a higher level of presence does not necessarily improve performance, as the sense of presence can be mediated by various factors.

2.2. Sensory modality, copresence, and performance in VEs

A key feature of collaborative virtual environments (CVEs) is the ability to support people's collaborative work. Buttolo et al. (1997, p. 422) define three types of shared interaction: "(1) browsing static environment, such as feeling haptic information in documents, databases, web pages; (2) sharing collaborative environments, in which users alternate in manipulating a common environment; and (3) interacting with cooperative environments, in which the task requires the simultaneous action of more than one user". The terms copresence (Schroeder, 2002) and social presence (McIsaac & Gunawardena, 1996; Zhao, 2004) are used to describe the user's sense of togetherness in such shared environments.


Studies have shown that the modalities supported in VEs indeed influence users' task performance and their sense of copresence (Basdogan et al., 2000; Hubbold, 2002; Sallnas et al., 2000). For example, haptic force feedback enhanced the sense of sharing and each user's perception of his/her partner's actions when carrying a stretcher together in a virtual chemical plant (Hubbold, 2002). Additional haptic feedback could also enhance perceived togetherness and improve task performance when pairs of people moved a ring on a wire collaboratively (Basdogan et al., 2000), as well as when they put cubes together to build one large cube (Sallnas et al., 2000). Copresence and presence often tend to co-vary; that is, if users have a stronger sense of presence, they also tend to have a stronger sense of copresence (Slater, Sadagic, Usoh, & Schroeder, 2000).

Collaborative tasks in CVEs can be affected by technology factors such as bandwidth, communication capabilities, and ease of navigation (Axelsson, 2002; Becker & Mark, 2002; Nilsson et al., 2002), as well as by social factors including interpersonal relationships (Cheng, Farnham, & Stone, 2002), trust (Schroeder & Axelsson, 2001), identity in terms of appearance and name, and non-verbal communication (Smith, Farnham, & Drucker, 2002). However, the extent to which the addition of other sensory modalities, such as haptic and thermal feedback, contributes to users' shared experience has not been studied much (Basdogan et al., 2000; Nam & Chung, 2006; Sallnas et al., 2000).

3. Method

To assess the effects of sensory modalities on user performance and perception (the sense of presence and copresence) in a collaborative virtual environment, this study used a collaborative game task in which participants played an air hockey game together with a remote partner under different sensory feedback conditions. User behavior measurements were also collected and analyzed in order to understand how much force participants applied to hit the puck, as well as how they used their mallets during the game.

3.1. Participants

Thirty participants were recruited from the student population at the University of Arkansas via posted flyers. Participants were given monetary compensation for their participation. All participants had normal or corrected-to-normal vision. There were 10 female and 20 male participants whose mean (M) age was 23.4 years (standard deviation, SD = 3.8). All participants played less than 2 h of 3D video games per week.

3.2. Apparatus

3.2.1. Perception questionnaire

A questionnaire with Likert-type rating scales was developed to measure users' perception of the collaborative virtual environment by modifying questions used in the study by Schroeder et al. (2001). The questionnaire contains nine items measuring perceived collaboration, the sense of presence, and the sense of copresence, with three questions each. All items were rated on a 6-point scale, where 1 = to a very small extent and 6 = to a very high extent.

Perceived sense of presence was measured by asking three questions regarding how present the participants felt in the CVE supporting different sensory modalities: (1) to what


extent did you have the experience of being in the room where you played the game?; (2) to what extent did you experience the environment as a place where you played the game rather than something that you were looking at?; and (3) to what extent did you feel like you were hitting an actual air hockey puck?

To measure the subjective sense of being together with another person in a computer-generated environment (Slater et al., 2000; Steed, Slater, Sadagic, Tromp, & Bullock, 1999), three questions were developed: (1) to what extent did you have a sense of being in the same room as your partner?; (2) during the game, to what extent did you have a sense that you were together with your partner in the same room?; and (3) to what extent was your experience in playing with your partner today like other real experiences, with regard to your sense of doing something together?

Three questions were also asked to find out how well participants thought they collaborated to play the game and how much they enjoyed the collaboration: (1) to what extent did you experience that you and your partner played the game well?; (2) to what extent did you enjoy collaborating with your partner in today's game?; and (3) to what extent would you, on another occasion, like to play a similar game with your partner?

3.2.2. Air hockey game

The air hockey game used in this study is a virtual reality game that mimics the traditional air hockey table games found in many entertainment and recreational facilities. The game was developed using commercial rapid prototyping software called proSENSE (Handshake VR Inc.). proSENSE is a MATLAB®/Simulink® toolbox that uses a drag-and-drop style of graphical programming. It allows the rapid development of haptic applications (e.g., rapid graphic and haptic rendering) without the need to write low-level C/C++ code for interfacing with the haptic device. Fig. 1 shows a screenshot of the air hockey game and its experimental environment, in which two people at remote locations played the game together while receiving multiple types of sensory feedback (e.g., visual, haptic, and auditory).

Two haptic devices called PHANTOM® Omni (SensAble Technologies) are connected to a computer: one Omni is used to control the left mallet, and the other is used to control the right mallet. The black disc in Fig. 1 represents the puck, which both players hit with their mallets. Players hit the puck into the opponent's goal, represented by the black slot at each end of the table, while defending their home goal. Players can catch, dribble, and hit the puck into their partner's goal. The game ends when either player scores seven points.

When the user pushes the mallet into the table, there is an opposing force from the table (i.e., pushing back up), making the user feel as if he/she is touching the table. A spring–damper model was used to calculate the force exerted when the user pushes the mallet. The spring component of the force is given by Hooke's law

F = k · x

where F is the force, k is the spring constant, and x is the displacement. The k term represents the stiffness of the table, and x represents how far the mallet has penetrated into the table. Thus, the deeper the user pushes the mallet into the table, the harder the force generated to oppose the motion. When hitting the puck, the user can also feel the impact force generated by the mallet, which is created in the same way as the opposing force.
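To make the penetration-based force model concrete, here is a minimal sketch of one haptic-loop step, assuming a simple spring term (as in the paper) plus the damping term a spring–damper model would add; the function name and the stiffness/damping constants are illustrative, not values taken from proSENSE.

```python
# Sketch of the penetration-based table force (Hooke's law F = k * x),
# with an assumed damping term for the spring-damper model mentioned above.
# Constants and names are illustrative, not from the authors' implementation.

def table_reaction_force(penetration_m: float, velocity_m_s: float,
                         k: float = 800.0, b: float = 5.0) -> float:
    """Return the upward reaction force (N) when the mallet penetrates the table.

    penetration_m: how far the mallet has sunk below the table surface (>= 0).
    velocity_m_s:  mallet velocity into the table (positive = pushing deeper).
    k: spring constant (table stiffness); b: damping coefficient (assumed).
    """
    if penetration_m <= 0.0:
        return 0.0                      # no contact, no opposing force
    spring = k * penetration_m          # Hooke's law term from the paper
    damper = b * velocity_m_s           # damping term of the spring-damper model
    return spring + damper

# Example: pushing 3 mm into the table at 0.1 m/s -> about 2.9 N opposing force.
print(table_reaction_force(0.003, 0.1))
```

The deeper the penetration, the larger the spring term, which is exactly the "harder the force" behavior described above; the impact force felt when hitting the puck would be computed the same way.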


Fig. 1. Screenshot of the air hockey game and experimental environment.


3.3. Experimental design

The experimental design was a one-way, within-subjects design with sensory modality as the factor. That is, there were three experimental conditions in the study, depending on the type of sensory feedback provided:

(1) visual-only feedback (V)
(2) visual and haptic feedback (V + H)
(3) visual, haptic, and audio feedback (V + H + A)

In the present study, we followed an experimental method used by Basdogan et al. (2000). That is, participants were asked to play the air hockey game together with a remote partner whom they were not allowed to meet or know. They also did not know where their partner was located. This person was an "expert" player of the air hockey game and played, as far as possible, at a constant level of performance throughout the game. Thus, only participant data, and not partner data, were collected in the present study. The participants and the partner were in different rooms but shared a common visual scene of the air hockey table (see Fig. 1).

In the visual-only (V) condition, participants played the game with their partner without receiving any haptic or audio feedback. In contrast, additional haptic feedback was provided to players through their personal haptic device in the visual–haptic condition


(V + H). In the visual–haptic–audio condition (V + H + A), players could also hear sounds during the game, for example, when they hit the puck or when the puck hit the wall of the table.

3.4. Dependent variables

To investigate the role of sensory modalities in CVEs, this study used several dependent measures, which can be categorized into three types of variables: task performance, perception, and behavior. Task performance was measured as the playing time either player spent scoring 7 points first. Perception measurements were determined by the questionnaire in terms of the extent to which participants experienced the sense of presence and the sense of togetherness. Perceived collaboration was also measured by the questionnaire in terms of the extent to which participants thought they collaborated to play and enjoyed the collaboration. Finally, behavior measures included the amount of force applied by participants and the mallet deviation. The mallet deviation (unit: meter) was determined by measuring the absolute distance between the mallet and the table. The force was measured as the average amount of force applied (unit: newton) during the game. Since we are only interested in the magnitude of the force used to hit the puck, only the x (movement left and right on the screen) and z (movement toward the top and bottom of the screen) direction forces were needed (the y-directional force represents the force pushed back from the table). The following equation was used to calculate the force magnitude

|F| = √(Fx² + Fz²)

where |F| is the force magnitude, Fx the x-directional force, and Fz the z-directional force.
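As a sketch of how these two behavior measures could be computed from logged device samples (the log format and function names below are hypothetical, not the authors' actual pipeline):

```python
# Sketch of the behavior measures defined above: mean in-plane force
# magnitude |F| = sqrt(Fx^2 + Fz^2), with the y (table reaction) force
# excluded, and mean mallet deviation as absolute distance from the table.
import math

def mean_force_magnitude(fx_fz_samples):
    """fx_fz_samples: iterable of (Fx, Fz) force samples in newtons."""
    mags = [math.hypot(fx, fz) for fx, fz in fx_fz_samples]
    return sum(mags) / len(mags)

def mean_mallet_deviation(mallet_y_samples, table_y=0.0):
    """Mean absolute distance (m) between mallet height and the table plane."""
    devs = [abs(y - table_y) for y in mallet_y_samples]
    return sum(devs) / len(devs)
```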

3.5. Procedure

On arrival, participants were introduced to the study and signed consent forms. After familiarizing themselves with the haptic Omni device, participants performed a short training exercise several times, designed to familiarize them with the air hockey game. They were told to control the puck by moving their Omni device in any direction.

After the familiarization and training sessions, participants played the game together with a partner under the different sensory feedback conditions. They were told to hit the puck into their partner's goal, seen on the opposite side of the table, while defending their home goal. Whoever scored 7 points first won the game. After playing the game in each condition, participants were asked to complete the perception questionnaire designed to measure their perceived presence, collaboration, and copresence. At the end of the experiment, participants completed a demographic questionnaire including game preference and their perception of their partner's gender.

4. Results

4.1. Performance: playing time

We first measured the playing time either player spent scoring 7 points first. Overall, it took participants 37.8 s on average (SD = 13.7) to play one game. Participants lost most


of the games to the remote partner (i.e., the expert player): the percentage of victories by the participants out of the total number of matches was 16.7% (15 out of 90 games). Mean times across the three conditions are displayed in Fig. 2.

A one-way analysis of variance (ANOVA) was used to test for differences between the feedback conditions. The analysis revealed a significant effect of the sensory feedback condition, F(2, 58) = 6.82; p = .0022. Contrasts showed that the length of game play in the visual-only condition (M = 31.3 s, SD = 11.1) was shorter than in the visual–haptic condition (M = 40.1 s, SD = 11.3), F(1, 29) = 8.08; p = .0081, and in the visual–haptic–audio condition (M = 42.0 s, SD = 16.1), F(1, 29) = 9.45; p = .0046. However, there was no difference in playing time between the visual–haptic and the visual–haptic–audio conditions (p > .5).
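For readers who wish to reproduce this kind of analysis, a one-way repeated-measures ANOVA with paired contrasts can be run as sketched below; the file name and column names are hypothetical, and this is not the software the authors used.

```python
# One way to run a one-way repeated-measures ANOVA plus paired contrasts on a
# long-format log of playing times. Data layout assumed: one row per
# subject x condition, with columns "subject", "condition", "time".
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("playing_times.csv")  # hypothetical data file

# Omnibus test across the three feedback conditions (within-subjects factor)
print(AnovaRM(df, depvar="time", subject="subject", within=["condition"]).fit())

# Pairwise contrast, paired because the design is within-subjects
v = df[df.condition == "V"].sort_values("subject")["time"].values
vh = df[df.condition == "V+H"].sort_values("subject")["time"].values
print(ttest_rel(v, vh))
```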

4.2. Perception

The reliability of the perception rating scales was first assessed by calculating Cronbach's coefficient α (Cronbach, 1951). The standardized alphas of the rating scales showed acceptable reliability (presence, α = .92; copresence, α = .85; collaboration, α = .87), with all coefficients greater than the suggested value of .70 (Nunnally, 1978). Fig. 3 shows mean values across the three different conditions on the three perception measures.
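Cronbach's α is a standard textbook statistic; a minimal sketch for one scale, given a (respondents × items) score matrix, might look like the following (illustrative, not the authors' code).

```python
# Cronbach's alpha for one rating scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scale))
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items of one scale."""
    k = scores.shape[1]                         # number of items (3 per scale here)
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```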

4.2.1. The sense of presence

A one-way, repeated-measures ANOVA showed a significant difference in the level of presence between the sensory feedback conditions, F(2, 58) = 52.98, p < .0001. Contrasts showed that participants reported a significantly stronger sense of presence when playing the game with additional haptic and audio feedback (M = 4.60; SD = 0.92) than when playing in the visual–haptic condition (M = 3.66; SD = 1.12), F(1, 29) = 21.33, p < .0001, and in the visual-only condition (M = 2.27; SD = 1.02), F(1, 29) = 85.19, p < .0001. Contrasts also showed that visual–haptic groups expressed a significantly stronger sense of presence than visual-only groups, F(1, 29) = 38.13, p < .0001.

[Fig. 2. Mean times (in seconds) across the feedback conditions.]

[Fig. 3. Mean scores across the three different conditions on the perception measures (presence, copresence, collaboration).]


4.2.2. The sense of copresence

Results indicated a significant difference in the sense of togetherness between the sensory feedback conditions, F(2, 58) = 48.35, p < .0001. Contrasts showed that participants reported a significantly stronger sense of copresence when playing the game with additional haptic and audio feedback (M = 4.38; SD = 0.72) than when playing in the visual–haptic condition (M = 3.21; SD = 1.13), F(1, 29) = 46.77, p < .0001, and in the visual-only condition (M = 2.35; SD = 1.32), F(1, 29) = 70.15, p < .0001. Contrasts also showed that visual–haptic groups expressed a significantly stronger sense of copresence than visual-only groups, F(1, 29) = 18.07, p = .0002.

4.2.3. Perceived collaboration

An ANOVA showed that there was a significant difference in the perceived level of collaboration between the sensory feedback conditions, F(2, 58) = 34.02, p < .0001. Contrasts showed that participants in the additional haptic and audio feedback condition (M = 4.44; SD = 0.90) reported a significantly stronger sense of collaboration than those in the visual–haptic condition (M = 3.56; SD = 1.15), F(1, 29) = 15.42, p = .0005, and those in the visual-only condition (M = 2.29; SD = 1.00), F(1, 29) = 51.47, p < .0001. Contrasts also showed that the visual–haptic group expressed a significantly stronger sense of collaboration than the visual-only group, F(1, 29) = 27.15, p < .0001.

When correlating the three perception measures using Pearson's correlation coefficient, we found significant positive correlations between presence, copresence, and collaboration (Table 1).

Table 1
Correlations between presence, copresence, and collaboration

                 Presence   Copresence   Collaboration
Presence         1.00       –            –
Copresence       .77*       1.00         –
Collaboration    .74*       .85*         1.00

* Significant at the 0.01 level.
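A matrix like Table 1 is a plain Pearson correlation matrix over the per-participant scale scores and can be reproduced as follows; the input file is a placeholder for the real ratings.

```python
# Pearson correlation matrix over the three perception scores.
# Assumed layout: CSV with columns presence, copresence, collaboration,
# one row per participant x condition (hypothetical file).
import numpy as np

ratings = np.loadtxt("perception_scores.csv", delimiter=",")
print(np.corrcoef(ratings, rowvar=False))  # 3 x 3 correlation matrix
```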

[Fig. 4. Mean values across the three different conditions on the behavior measures (force in newtons, left axis; mallet deviation in meters, right axis).]

4.3. Behavior

To understand users' behavior in CVEs, this study collected two types of behavior measures: the amount of force applied and the mallet deviation from the table. Results were analyzed using one-way, repeated-measures ANOVAs. Mean values across the three conditions on the two behavior measures are displayed in Fig. 4.

4.3.1. Amount of force applied

An ANOVA was performed on the average amount of force that participants applied during the game, and a significant effect of the sensory feedback condition was found, F(2, 58) = 50.76, p < .0001. Contrasts showed that participants in the visual-only condition (M = 3.73 N; SD = 0.41) applied more force than those in the visual–haptic condition (M = 2.69 N; SD = 0.52), F(1, 29) = 81.81, p < .0001, and those in the visual–haptic–audio condition (M = 2.68 N; SD = 0.64), F(1, 29) = 57.78, p < .0001. However, the average amount of force that participants applied was not significantly different between the visual–haptic and the visual–haptic–audio conditions (p > 0.05).

4.3.2. Mallet deviation

A one-way, repeated-measures ANOVA was performed on the mallet deviation, and the results demonstrated a significant effect of the sensory feedback condition, F(2, 58) = 74.70, p < .0001. Contrasts showed that the average mallet deviation from the table in the visual-only condition (M = 0.06 m; SD = 0.02) was significantly larger than in the visual–haptic condition (M = 0.02 m; SD = 0.02), F(1, 29) = 92.37, p < .0001, and in the visual–haptic–audio condition (M = 0.02 m; SD = 0.02), F(1, 29) = 80.82, p < .0001. However, the average mallet deviation was not significantly different between the visual–haptic and the visual–haptic–audio conditions (p > 0.05).

5. Discussion and conclusions

The present study began by suggesting that a deeper understanding of user interaction with collaborative virtual environments (CVEs) is needed to effectively support collaborative work in CVEs. For this purpose, the roles of the sensory modalities CVEs


support were investigated to see how they contribute to users' task performance, perception, and behavior.

5.1. Task performance

Overall, results indicated that sensory modality affected users' task performance, measured by playing time. When playing the game with only visual feedback, playing time was shorter than when playing with additional haptic, or additional haptic and audio, feedback. That is, the game was over earlier in the visual-only condition than in the other feedback conditions. Users could not control the puck and their mallets effectively with visual-only feedback, because they did not feel the movement (and speed) of the puck and their mallets during the game. The important practical implication of this finding is that the sensory information required to perform the task at hand should be provided in order to improve task performance. For example, the sense of touch was required for air hockey players to control their mallets and hit the puck. This result is consistent with previous findings that the sensory modalities supported by VEs can affect users' task performance (Basdogan et al., 2000; Hubbold, 2002; Sallnas et al., 2000; Slater & Wilbur, 1995).

5.2. Perception

The results of this study are, in general, consistent with the expectations stated in the Background section of this report. It was found that the sensory modalities supported by CVEs, more specifically additional haptic and auditory feedback, positively influenced users' perception of the collaborative environment. When receiving additional haptic and auditory feedback, for example, users expressed a significantly stronger sense of presence, of togetherness between two people at different locations, and of collaboration with their partners. These results are in agreement with the findings of previous studies (e.g., Basdogan et al., 2000; Hubbold, 2002; Sallnas et al., 2000). We also found that perceived presence, copresence, and collaboration co-vary; that is, if users had a stronger sense of presence, they also had a stronger sense of copresence and collaboration.

5.3. Behavior

As noted in the Introduction, one of our primary motivations for conducting this study was to investigate the ways in which users work together in a CVE supporting different sensory modalities. We found that the sensory modalities supported by CVEs can affect users' behavior. When hitting the puck, for example, visual-only groups applied more force than visual–haptic and visual–haptic–audio groups. The reason for the higher forces when only visual feedback is presented is that when users cannot feel the impact of the mallet on the puck, it is natural to hit as hard as possible. Without haptic feedback, the user is free to move the mallet as fast as desired, an unconstrained motion that produces a larger force. When the user can feel the impact between the mallet and puck through haptic feedback, on the other hand, he or she knows that contact has been made and can adjust the force to what feels sufficient to move the puck. The force reading is therefore much smaller, as the user knows that the amount of force applied was enough to move the puck. However, additional audio feedback did not help users adjust their force to hit the puck in the present study.

Table 2
Correlations between presence, copresence, amount of force applied, and mallet deviation

                   Presence   Copresence   Amount of force   Mallet deviation
Presence           1.00       –            –                 –
Copresence         .77*       1.00         –                 –
Amount of force    −.45*      −.29*        1.00              –
Mallet deviation   −.44*      −.29*        −.29*             1.00

* Significant at the 0.01 level.

It was also found that the average mallet position in the visual-only condition deviated significantly more from the table than in the visual–haptic and visual–haptic–audio conditions. It seems that people found it difficult to determine the table top and the position of the mallet in a 3D environment without the sense of touch. Additional audio feedback was not helpful enough to reduce the mallet deviations from the table in the study.

One of the interesting results of the study was that users' perception of CVEs can be correlated with their behavior. As shown in Table 2, for example, significant relationships between presence, copresence, amount of force applied, and mallet deviation were found.

When people had a stronger sense of presence and copresence, they could control their mallets effectively so that they moved along the table (i.e., the significant negative correlation between presence, copresence, and mallet deviation). This means that there was a smaller chance for users to miss the puck. Given the significant negative correlation between presence, copresence, and the amount of force applied, participants could also control the amount of force needed to hit the puck when they sensed a higher level of presence and copresence in the CVE.

An important issue that still needs to be considered is the extent to which the findings of this study generalize to other types of shared interaction, such as browsing static environments, other shared collaborative environments (e.g., two players with the same level of experience), and interacting with cooperative environments (Buttolo et al., 1997). Many more studies are also required to examine how collaboration between people in such different interaction environments is affected by the supported sensory modalities. It would therefore be interesting to investigate what roles additional sensory feedback such as haptic (Durlach & Slater, 2000; Sallnas et al., 2000) and thermal feedback (Nam & Chung, 2006) play in user perception of collaborative virtual environments, and whether they affect task performance and user behavior in various task environments.

References

Axelsson, A. S. (2002). The digital divide – Status differences in virtual environments. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments (pp. 188–204). London: Springer.
Barfield, W., & Furness, T. A. (1995). Virtual environments and advanced interface design. New York: Oxford University Press.
Barfield, W., Hendrix, C., & Bystrom, K. (1999). Effects of stereopsis and head tracking on performance using desktop virtual environment displays. Presence: Teleoperators and Virtual Environments, 8, 237–240.
Basdogan, C., Ho, C., Srinivasan, M., & Slater, M. (2000). An experimental study on the role of touch in shared virtual environments. ACM Transactions on Computer–Human Interaction, 7(4), 443–460.
Becker, B., & Mark, G. (2002). Social conventions in computer-mediated communication: A comparison of three online shared virtual environments. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments (pp. 19–39). London: Springer.
Buttolo, P., Oboe, R., & Hannaford, B. (1997). Architectures for shared haptic virtual environments. Computers and Graphics, 21, 421–429.
Bystrom, K., Barfield, W., & Hendrix, C. (1999). A conceptual model of the sense of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 8(2), 241–244.
Casanueva, J. S., & Blake, E. H. (2000). Presence and copresence in collaborative virtual environments. Technical Report CS00-07-00. Department of Computer Science, University of Cape Town, South Africa.
Cheng, L., Farnham, S., & Stone, L. (2002). Lessons learned: Building and deploying shared virtual environments. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments (pp. 90–111). London: Springer.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Durlach, N., & Slater, M. (2000). Presence in virtual environments and virtual togetherness. Presence: Teleoperators and Virtual Environments, 9(2), 214–217.
Ellis, S. R. (1992). Nature and origin of virtual environments: A bibliographic essay. Computer Systems in Engineering, 2(4), 321–347.
Held, R. M., & Durlach, N. (1992). Telepresence. Presence: Teleoperators and Virtual Environments, 1(2), 109–112.
Hendrix, C. (1994). Exploratory studies on the sense of presence in virtual environments as a function of visual and auditory display parameters. Unpublished master's thesis, University of Washington, Seattle, WA.
Hubbold, R. (2002). Collaborative stretcher carrying: A case study. In Proceedings of the 2002 EUROGRAPHICS workshop on virtual environments, Barcelona, Spain.
Lombard, M., & Ditton, T. B. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2). http://jcmc.indiana.edu/vol3/issue2/lombard.html. Retrieved August 25, 2006.
McIsaac, M. S., & Gunawardena, C. N. (1996). Distance education. In D. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 403–437). New York, NY: Macmillan.
Munro, A. (1999). Place, space, inhabitants: Use of space by urban tribes. Workshop position paper. In Workshop on designing from the interaction out: Using intercultural communication as a framework to design interactions in collaborative virtual communities, GROUP'99 workshop, Phoenix, AZ.
Nam, C. S., & Chung, D. (2006). The influences of haptic thermal feedback on object identification and perceived presence. In Proceedings of the 16th world congress of the International Ergonomics Association (IEA). Maastricht, Netherlands: IEA Press.
Nilsson, A., Heldal, I., Axelsson, A., & Schroeder, R. (2002). The long-term uses of shared virtual environments: An exploratory study. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments (pp. 112–126). London: Springer.
Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.
Prasolova-Forland, E., & Divitini, M. (2002). Supporting learning communities with collaborative virtual environments: Different spatial metaphors. In Proceedings of ICALT 2002. IEEE Press.
Redfern, S., & Naughton, N. (2002). Collaborative virtual environments to support communication and community in internet-based distance education. Journal of Information Technology Education, 1(3), 201–211.
Sallnas, E.-L., Rassmus-Grohn, K., & Sjostrom, C. (2000). Supporting presence in collaborative environments by haptic force feedback. ACM Transactions on Computer–Human Interaction, 7(4), 461–476.
Schloerb, D. W. (1995). A quantitative measure of telepresence. Presence: Teleoperators and Virtual Environments, 4(1), 64–80.
Schroeder, R. (2002). Copresence and interaction in virtual environments: An overview of the range of issues. In Presence 2002: Fifth international workshop, Porto, Portugal (pp. 274–295).
Schroeder, R., & Axelsson, A. (2001). Trust in the core: A study of long-term users of Active Worlds. In Digital borderlands: A cybercultural symposium, Norrkoping, Sweden.
Schroeder, R., Steed, A., Axelsson, A., Heldal, I., Abelin, A., Widestrom, J., et al. (2001). Collaborating in networked immersive spaces: As good as being there together? Computers and Graphics, 25, 781–788.
Slater, M., Linakis, V., Usoh, M., & Kooper, R. (1996). Immersion, presence, and performance in virtual environments: An experiment in tri-dimensional chess. In Proceedings of VRST'96, Hong Kong.
Slater, M., Sadagic, A., Usoh, M., & Schroeder, R. (2000). Small group behaviour in a virtual and real environment: A comparative study. Presence: Teleoperators and Virtual Environments, 9(1), 37–51.
Slater, M., & Wilbur, S. (1995). Through the looking glass world of presence: A framework for immersive virtual environments. In M. Slater (Ed.), FIVE'95 framework for immersive virtual environments. QMW University of London.
Smith, M., Farnham, S., & Drucker, S. (2002). The social life of small graphical chat spaces. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments (pp. 205–220). London: Springer.
Stansfield, S., Miner, N., Shawver, D., & Rogers, D. (1995). An application of shared virtual reality to situational training. In Proceedings of the virtual reality annual international symposium (pp. 156–161).
Steed, A., Slater, M., Sadagic, A., Tromp, J., & Bullock, A. (1999). Leadership and collaboration in virtual environments. In IEEE Virtual Reality (pp. 58–63).
Steuer, J. (1992). Defining virtual reality: Dimensions determining telepresence. Journal of Communication, 42(4), 73–93.
West, A., & Hubbold, R. (2001). System challenges for collaborative virtual environments. In E. Churchill, D. Snowdon, & A. Munro (Eds.), Collaborative virtual environments: Digital places and spaces for interaction (pp. 43–54).
Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments, 7, 225–240.
Youngblut, C., & Huie, O. (2003). The relationship between presence and performance in virtual environments: Results of a VERTS study. In Proceedings of IEEE VR.
Zhao, S. (2004). Toward a taxonomy of copresence. Presence: Teleoperators and Virtual Environments, 12, 445–455.