

Transactions of ISCIE, Vol. 30, No. 11, pp. 427–438, 2017 427

Paper

Activation of a Co-located Meeting by Visualizing Conversation and Facilitating Participant's Behavior*

Hiroyuki Adachi†‡, Seiko Myojin†§ and Nobutaka Shimada†

Although there are many opportunities to have multi-party conversations such as decision-making meetings and brainstorming, we do not always communicate well. A multi-party conversation sometimes ends up one-way because someone speaks too much, and people cannot understand a less-talkative person's thoughts. Considering situations where balanced participation is desirable, the amount of speaking time should be well controlled. We propose a co-located meeting support system called ScoringTalk that activates conversation by visualizing the conversation state in real time, providing scores based on the amount of conversation, and facilitating participants' behavior, such as speaking, listening, and changing a conversation partner, to make a balanced conversation. In ScoringTalk, each participant has a tablet with dual cameras, which works as both sensor and display. Tablets are commonly used in social meetings; therefore, there is no need to prepare projectors or microphones. The evaluation results showed that our system measured conversation, provided the visualization of conversation, scores, and facilitation, and increased the balance of participation while keeping the amount of speaking time.

1. Introduction

Although there are many opportunities to have conversations such as meetings and brainstorming, we cannot always communicate well. In an ideal communication, everyone reaches an acceptable conclusion and understands each other. In real communication, on the other hand, a conversation sometimes ends up one-way because someone speaks too much, and people cannot understand a less-talkative person's thoughts. Considering situations where balanced participation is desirable, such as brainstorming and decision-making meetings, the amount of speaking time should be well controlled[4,5,11,15].

We have developed a co-located meeting support system called ScoringTalk that gamifies conversation based on the amount of speaking time and listening time to balance participation[1]. It measures a conversation through tablets with front and back cameras, and scores each participant's speaking individually. The score increases and decreases by the rules of the game: the score increases when a participant speaks and listens, and decreases when a participant speaks too much. In the system, participants

* Manuscript Received Date: October 19, 2016
† College of Information Science and Engineering, Ritsumeikan University; Kusatsu, Shiga 525-8577, JAPAN
‡ Presently with Yahoo Japan Corporation.
§ Presently with Osaka University.
Key Words: CSCW, meeting support, visualization, facilitation, gamification, tablet.

have a conversation while trying to maximize their scores, which makes the conversation balanced.

In this paper, we propose scoring methods for enhancing group activity in terms of individual utterance and mutual conversation. We also propose a facilitation method to increase the variety of opinions for a better conversation by giving all the participants a chance to speak. The facilitation suggests the next speaking or listening behavior to each participant based on the current state and history of the conversation, for example, encouraging a participant to give an opinion, suppressing too much speaking, or changing a conversation partner. This is because, in our earlier experiments, some participants did not know what behavior would make the conversation better when given only the visualization of the conversation and the scores. We expect this system to activate conversation and improve the quality of the meeting.

The following are the main contributions of this study.

• Proposes a meeting support system called ScoringTalk that activates conversation by visualizing the conversation in real time, providing scores based on the amount of conversation, and facilitating participants' speaking/listening behavior to make a balanced conversation.

• Develops a tablet system that measures the utterance of each participant by using the tablet's front and back cameras simultaneously. It measures the position and orientation of the participant's face and estimates the speaking state by image processing, because the tablet's microphone picks up fan noise and other participants' voices. The system integrates observations from each participant's tablet and provides real-time visual feedback on each tablet.

• Evaluates the performance of ScoringTalk in a small-group meeting with and without the feedback (visualization and facilitation), and shows that such feedback affects the social dynamics of the conversation.

2. Related Work

Visual feedback systems with a shared table using microphones and projectors are popular. Bergstrom et al.[2] proposed Conversation Clock, which visualizes personal cumulative speaking time and the duration of the current turn on the table. Ohshima et al.[3] proposed a tabletop system called TableTalkPlus that visualizes the dynamics of communication by a crowd of lights. It motivates the participants to talk, changes the direction of the conversation, and designs the field of conversation. Terken et al.[4] proposed a tabletop system that represents the amounts of speaking time and listening time as circles projected in front of the participants. The size of the circles increases when a person is speaking and listening. It gives positive feedback about speaking and listening behavior.

Visual feedback systems with wall displays are also popular. DiMicco et al.[5] proposed a system that shows a bar chart indicating the relative amount of speaking time on a shared display so that the participants can know whether they are over-participating or under-participating. Tausch et al.[6] presented Groupgarden, a metaphorical group mirror providing feedback about individual as well as group performance. The system changes its visualization based on participation; for example, the petals of a flower fill up when a participant contributes ideas. Takabayashi et al.[7] proposed a discussion support system using visualization of utterance intentions for group decision making.

There are other interaction systems using handheld devices. Meeting Mediator[8] is a portable system that detects social interactions using an electronic sensing device and provides visual feedback on a mobile screen to enhance group collaboration. Fujita et al.[9] proposed Ambient Suite, a room-shaped information environment that enhances communication in a standing-party situation. In this system, people hold a cup-shaped device and wear sensors, and the sensors measure each participant's speaking time, hand acceleration, and head rotation. They represented the atmosphere's activity using a combination of these cues.

Visual feedback systems for co-located multi-party conversation are popular (e.g.[2,3,5]); however, our system provides not only visualization but also scores for enhancing conversation with gamification

and facilitation messages for suggesting participants' speaking/listening behavior. Terken et al.[4] proposed visual feedback based on participants' speaking time and listening time. Their system has only positive feedback, in which the size of the projected circles increases along with the amount of speaking/listening time.

In the literature on discussion analysis and support, it is thought desirable that all participants give opinions, not only a particular participant, to increase the variety of opinions in decision-making meetings and brainstorming, so the participation of the group members should be well controlled. Bachour[15] pointed out and verified that members' unbalanced participation decreases motivation in collaborative learning. DiMicco[5], Terken[4], and Schiavo[11] all mentioned that balancing participants' total utterance time is important for collaborative decision making, which is an opinion-convergent type of discussion, and not only for brainstorming, which is an opinion-divergent one. Recently, the facilitator, who coordinates an active discussion, has become popular and plays an important role in discussion management. Bachour[15] pointed out that facilitators should have the skill to "ensure that all team members contribute," based on Burns[16].

To balance each member's participation, it is necessary not only to promote someone's speaking but also to change a dominant participant's behavior. Therefore, our system employs both positive and negative feedback. It scores the balance of participation based on the rules of the game: the score increases in a balanced conversation; otherwise it decreases. Groupgarden[6] has negative feedback; however, the feedback of that system is controlled by a moderator using the Wizard of Oz method[10], and the system does not actually provide any automatic feedback by itself. Schiavo et al.[11] proposed an automatic facilitation system using Kinect. It monitors the group members' nonverbal behavior and promotes balanced participation by giving targeted directives to the participants based on the current state of the conversation.

On the other hand, our system measures the participants' verbal behavior, i.e., how long who speaks to whom and when, and records the history of such information. Our system automatically provides facilitation suggesting participants' speaking/listening behavior based on not only the current state of the conversation but also its history. The positive and negative feedback messages can lead conversations not only to be more balanced but also, conversely, to focus on a specified participant so that he/she shares his/her opinions more.

In addition, our system has the advantage of being easy to use, because it only requires tablets or smartphones, which work as both sensor and display. Tablets/smartphones are among the handheld devices used in previous research[8,9]; however, they are more popular than those devices and are commonly used in social meetings for displaying meeting materials. There is no need to prepare other special equipment such as projectors, microphones, and sensors. Therefore, the system can be set up more easily than most of the previous visual feedback systems.

Fig. 1 A group conversation using ScoringTalk. A conversation is monitored through the tablets while the system provides real-time feedback on the tablets' screens

3. ScoringTalk

3.1 Concepts

We propose a meeting support system called ScoringTalk that activates a co-located small-group meeting by providing feedback about the balance of conversation. Fig. 1 shows a three-person conversation using ScoringTalk. People hold tablets with front and back cameras and talk around a table which has a marker on it. The conversation is monitored through the tablets with dual cameras, and the system provides real-time feedback on the tablets' screens.

The concepts of the system are the following.

• Sensing and visualizing device for multi-party conversation
We propose a system using tablets, which are popular today and commonly used in social meetings. Tablets often have dual cameras (a front camera and a back camera), and the system uses both cameras simultaneously to sense the conversation and provides visual feedback on each tablet's screen. There is no need to prepare special equipment such as microphones and projectors, whereas most previous works require such equipment for measurement and visualization; therefore, our system has the advantage of being easy to use. In addition, the system can be used in a conversation where participants may need to move, since the participant and the device move together.

• Incentive design for multi-party conversation
We introduce a game element into the conversation to motivate participants and control the balance of the amount of conversation. We propose a scoring method based on the amount of each user's speaking time and listening time for balancing participation. We expect that providing real-time feedback of the score encourages the participants to speak more and gives them a chance to consider a lack of balance in participation. We also propose a facilitation method for guiding a participant's behavior based on the current participation and the transition of the conversation. It enables a participant to consider a concrete behavior for a better conversation and makes the conversation more active.

Fig. 2 System structure. The system consists of tablets, a communication server, and a facial measurement server. They transfer messages via wireless

3.2 Implementation

We implement the system as shown in Fig. 2. We use Windows 8.1 tablets and develop the application for them with Unity¹. A communication server runs on Node.js², and a facial measurement server runs over ZeroMQ³.

In the following sections, we describe the methods for measuring the conversation, scoring based on the amount of conversation, visualizing the conversation, and facilitating participants' behaviors.

3.2.1 Measurements

In this section, we describe the methods for measuring individual utterances and for measuring the group conversation as a whole.

There are three pieces of information about an individual utterance: who speaks, when, and from where to where. Each tablet obtains them in the following way using its front and back cameras, and sends them to the communication server every frame.

(1) Who speaks: Each user has a tablet and is connected to the communication server via wireless with an ID. The current speaker is identified by this ID.

(2) When speaks: A user's mouth movement is captured by the front camera of the tablet the user holds. When the change rate of the user's mouth-open level exceeds a predetermined threshold, the system recognizes that the user is speaking. If the user's face is not found, the system assumes the user is not speaking. The balanced error rate of the speech detection under ideal conditions is about 10%.

¹ A platform for creating games and interactive experiences.
² A server-side JavaScript environment.
³ A cross-platform, cross-language messaging library.


Fig. 3 Geometric relation of user-tablet-marker

(3) From where to where: A user's position and face direction in the world coordinate system, determined by the markers on the table, are calculated from the images captured by the front and back cameras of the tablet and the geometric relation of user-tablet-marker (see Fig. 3). The homogeneous transformation matrix mT_f represents this information. It is obtained by multiplying the following three matrices as

    mT_f = (bcT_m)^(-1) · bcT_fc · fcT_f.    (1)

First, bcT_m is measured from a back-camera image using NyARToolkit (a library for AR applications). Second, bcT_fc is calibrated in advance, since the relative position of the two cameras on a particular tablet is fixed. Last, fcT_f is measured from a front-camera image using OKAO® Vision[12]. The estimated horizontal face direction has a margin of error of plus or minus two degrees[13]. Tomioka et al. proposed a pseudo see-through tablet employing the front and back cameras in a similar framework[14].
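Eq. (1) is a chain of 4×4 homogeneous transforms. The following numpy sketch illustrates the composition; the identity-rotation poses and the specific translations are illustrative assumptions standing in for the measured (bcT_m, fcT_f) and pre-calibrated (bcT_fc) matrices, not values from the paper.

```python
import numpy as np

def face_in_marker_frame(T_bc_m, T_bc_fc, T_fc_f):
    """Eq. (1): mT_f = (bcT_m)^(-1) * bcT_fc * fcT_f.

    T_bc_m : marker pose in the back-camera frame (from the back image).
    T_bc_fc: front-camera pose in the back-camera frame (pre-calibrated).
    T_fc_f : face pose in the front-camera frame (from the front image).
    Returns the face pose in the marker (world) frame.
    """
    return np.linalg.inv(T_bc_m) @ T_bc_fc @ T_fc_f

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative poses (identity rotations, translations only).
T_bc_m = transform(np.eye(3), [0.0, 0.0, 0.5])    # marker 0.5 m ahead
T_bc_fc = transform(np.eye(3), [0.0, 0.02, 0.0])  # cameras 2 cm apart
T_fc_f = transform(np.eye(3), [0.0, 0.0, 0.4])    # face 0.4 m from front cam

print(face_in_marker_frame(T_bc_m, T_bc_fc, T_fc_f))
```

With real data, the rotation blocks carry the face orientation, so the same composition yields both the position and the face direction in the world frame.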

The system specifies who is where, which direction he/she faces, and when he/she speaks, as described above. The system integrates these observations on the server and estimates a conversation partner for each user through the following steps (see Fig. 4).

(1) Calculate a vector U_i as the face direction of user i.

(2) Calculate a vector V_ij directed from user i to user j.

(3) Calculate the similarity Sim_ij of U_i and V_ij as

    θ = cos⁻¹( (U_i · V_ij) / (‖U_i‖ ‖V_ij‖) ),
    Sim_ij = cos θ if |θ| < θ_thresh, 0 otherwise,    (2)

where θ_thresh represents the maximal angular difference between U_i and V_ij when user i stares at user j to speak or listen to him/her. Here θ_thresh is experimentally fixed at π/6 in consideration of measurement error, independently of the number of participants.

(4) Select the user j with the maximum Sim_ij as the conversation partner of user i. If Sim_ij = 0, user i has no conversation partner.

Finally, the system obtains how long a person speaks to another person and how long the person watches him/her as the statistical profiles of the conversation.

Fig. 4 Conversation partner estimation. The conversation partner is decided by the participants' positions and face directions
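Steps (1)-(4) can be sketched as follows. The 2-D table-plane coordinates, the unit face-direction vectors, and the user labels are illustrative assumptions for the sketch; in the actual system these quantities come from the pose measurement of Eq. (1).

```python
import math

def estimate_partner(positions, directions, i, theta_thresh=math.pi / 6):
    """Steps (1)-(4): pick the partner whom user i most directly faces.

    positions: {user: (x, y)} on the table plane.
    directions: {user: (x, y)} unit face-direction vectors (U_i).
    Returns the partner's id, or None if every Sim_ij is 0.
    """
    best, best_sim = None, 0.0
    ux, uy = directions[i]                                   # U_i
    for j, (px, py) in positions.items():
        if j == i:
            continue
        vx, vy = px - positions[i][0], py - positions[i][1]  # V_ij
        norm = math.hypot(ux, uy) * math.hypot(vx, vy)
        if norm == 0:
            continue
        # Angle between U_i and V_ij, clamped for numerical safety.
        theta = math.acos(max(-1.0, min(1.0, (ux * vx + uy * vy) / norm)))
        sim = math.cos(theta) if abs(theta) < theta_thresh else 0.0  # Eq. (2)
        if sim > best_sim:
            best, best_sim = j, sim
    return best

positions = {"A": (0, 0), "B": (1, 0), "C": (0, 1)}
directions = {"A": (1, 0), "B": (-1, 0), "C": (0, -1)}
print(estimate_partner(positions, directions, "A"))  # A faces toward B
```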

3.2.2 Scoring

In this section, we present two scores in brainstorming for activating group activity. We designed two indices for evaluating group activity with the following considerations:

• Maximize the amount of individual utterance. We consider that active participation leads to idea generation.

• Balance the amount of pair conversation. We consider that having conversations with a variety of people increases the variety of ideas.

We define the index of balance of individual utterance as Eq. (3), applying the Gini coefficient as a measure of equality. Similar metrics inspired by the Gini coefficient have already been adopted for measuring participation equality in group conversations[4,11].

    I_indiv(t) = n / (2(n−1)) · Σ_i |Utterance_i(t) − 1/n|.    (3)

Utterance_i(t) is the relative amount of utterance of user i at time t, and n is the number of users. I_indiv(t) becomes 0 when the amounts of individual utterances are balanced, and becomes 1 when they are unbalanced. We also define the index of balance of pair conversation as Eq. (4), similarly to Eq. (3).

    I_pair(t) = nC2 / (2(nC2 − 1)) · Σ_{i,j} |Conversation_ij(t) − 1/nC2|.    (4)

Conversation_ij(t) is the relative amount of pair conversation between user i and user j at time t, and nC2 is the number of combinations of two participants. The indices are updated every frame based on the measurements obtained within the last 30 seconds. We calculate the weighted average of each index over a time window T, and convert it to a score on a scale of 0% to 100% as in Eqs. (5) and (6).


Fig. 5 The visual feedback on users' devices with annotations. The left area shows the visualization of the conversation. The right area shows the scores and bar charts representing group activity

    Score_indiv(t) = 100 ( 1 − Σ_{s=1}^{T} s · I_indiv(t−T+s) / Σ_{s=1}^{T} s ).    (5)

    Score_pair(t) = 100 ( 1 − Σ_{s=1}^{T} s · I_pair(t−T+s) / Σ_{s=1}^{T} s ).    (6)

The system provides these scores to the users in real time through the tablets' screens so that the users can control their speaking.
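The scoring pipeline of Eqs. (3)-(6) can be sketched as follows; the sample ratios and the window contents are illustrative, not measured data.

```python
def balance_index(ratios):
    """Eq. (3)/(4): Gini-style imbalance of relative amounts.

    ratios: relative utterance (or pair-conversation) amounts summing
    to 1, one entry per user (or per pair). Returns 0 for a perfectly
    balanced group and 1 for a maximally unbalanced one.
    """
    n = len(ratios)
    return n / (2 * (n - 1)) * sum(abs(r - 1 / n) for r in ratios)

def score(index_history):
    """Eq. (5)/(6): weighted average of recent indices, scaled to 0-100.

    index_history: the last T index values, oldest first; newer values
    get larger weights s = 1..T, so recent (im)balance dominates.
    """
    T = len(index_history)
    weights = range(1, T + 1)
    weighted = sum(s * v for s, v in zip(weights, index_history))
    return 100 * (1 - weighted / sum(weights))

print(balance_index([1 / 3, 1 / 3, 1 / 3]))  # balanced trio -> 0.0
print(score([0.0, 0.0, 0.5]))                # recently unbalanced -> 75.0
```

The weighting makes the score react quickly to a recent imbalance while still crediting earlier balanced stretches within the window.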

3.2.3 Visualization

Fig. 5 shows an example of the visual feedback. The left area shows a visualization of the conversation. It represents a situation where user A is speaking to user C. A circle represents a user's position, and a dotted line represents a user's face direction. A purple bar beside a circle represents the cumulative speaking time from one user to another, and a red bar represents the cumulative watching time. The right area shows the group activity. The bar charts represent the relative amounts of individual utterances and pair conversations, and the numbers below show the scores.

This visualization lets the users understand their amount of conversation, and may motivate them, for example, to speak to someone they have not talked with much. The visualization currently uses a table-centric view; we are also investigating a user-centric view.

3.2.4 Facilitation

In this section, we present the facilitation mechanism of ScoringTalk in a three-person conversation. The facilitation aims to control the amount of conversation to increase the participation of all. The facilitation messages are provided by the following steps. First, the system labels each participant every frame with one of three states of balance of participation:

Over-participated: The participant's relative speaking time is more than 50%.

Under-participated: The participant's relative speaking time is less than 20%.

Almost equal: Otherwise (the participant's relative speaking time is between 21% and 49%).

Second, the system determines the state of the group based on the labeling of the participants. If there is an over-participated person, the state of the group is over-participated; else if there is an under-participated person, the state of the group is under-participated; otherwise, the state of the group is almost equal. Last, the system provides the following facilitation messages every 30 seconds depending on the state of the group, as shown in Fig. 6.

Over-participated: In this case, the system tries to give a less-speaking participant a chance to speak and to control the over-participated person's speaking. The system provides the message "How about to be a listener?" to the participant labeled over-participated, and the message "Let's share your idea!" to the other participants.

Under-participated: In this case, the system tries to encourage a less-speaking participant to speak and proposes that the other participants have a conversation with him/her. We consider there are two ways to start such a conversation: speaking to him/her or hearing from him/her. The system provides the message "Let's share your idea!" to the participant labeled under-participated. The system randomly chooses and provides one of the messages "How about speaking to (the under-participated person's name)?" and "How about hearing the idea of (the under-participated person's name)?" to the other participants.

Almost equal: In this case, the system tells the participants that the conversation has been well balanced and suggests that they change conversation partners to increase the variety of opinions. The system randomly chooses and provides one of the messages "OK! Keep it up!" and "How about speaking to (the name of a person you have not talked with much)?" to all the participants.
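The labeling and message rules above can be sketched as a small state machine. This is a simplified illustration: it always emits one fixed message per case instead of the paper's random choice between two, and the speaking ratios are illustrative inputs.

```python
def group_state(speaking_ratios):
    """Label the group from each participant's relative speaking time.

    speaking_ratios: {name: ratio}, ratios summing to 1. Thresholds
    follow the paper's rules: >50% over, <20% under, else almost equal.
    """
    if any(r > 0.5 for r in speaking_ratios.values()):
        return "over-participated"
    if any(r < 0.2 for r in speaking_ratios.values()):
        return "under-participated"
    return "almost equal"

def messages(speaking_ratios):
    """Assign one facilitation message per participant (simplified:
    no random choice between the two alternative messages)."""
    state = group_state(speaking_ratios)
    out = {}
    for name, r in speaking_ratios.items():
        if state == "over-participated":
            out[name] = ("How about to be a listener?" if r > 0.5
                         else "Let's share your idea!")
        elif state == "under-participated":
            quiet = min(speaking_ratios, key=speaking_ratios.get)
            out[name] = ("Let's share your idea!" if name == quiet
                         else f"How about speaking to {quiet}?")
        else:
            out[name] = "OK! Keep it up!"
    return out

print(messages({"A": 0.6, "B": 0.25, "C": 0.15}))
```

In the real system, these rules are re-evaluated every 30 seconds against the measured speaking ratios.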

Schiavo et al.[11] proposed a facilitation system based on the balance of participation, similar to ours; however, their facilitation is based only on the current state of the conversation. Our system obtains both the current state of the conversation, i.e., how long who speaks to whom in a recent time span, and the history of the conversation, i.e., how long who speaks to whom and when, together with the total amount of conversation time. Therefore, the system can not only promote speaking but also suggest that a participant have a conversation with another participant with whom he/she has not talked much.

By providing the facilitation messages, the participants will be able to understand the timing of speaking and listening for a better conversation, whereas with only the visualization of the conversation they may not understand what behavior would make the conversation better. In addition, the facilitation gives a participant a chance to notice that he/she has been talking only to a specific participant, and the participant will be able to change his/her behavior and speak to the other participant.

Fig. 6 Facilitation messages on the three states of a balance of participation

Fig. 7 Examples of screenshots of the tablet interface: (a) at the earlier time of the session; (b) at the later time of the session

Fig. 7(a) shows an example screenshot of the tablet interface at the earlier time of the session, and Fig. 7(b) shows one at the later time of the session. The facilitation messages are shown in the lower right of the interface as blue text. The two messages are different, showing that the system provides facilitation messages based on the situation at the time.

4. Evaluation

We evaluated ScoringTalk to examine whether it can balance participation and what the effect of the feedback was. In this section, we describe the setup, the results, and the discussion of the test.

4.1 Methods

4.1.1 Participants

To evaluate the system, we conducted a study of four groups of three participants each, a total of 12 participants (11 males, 1 female, 22-25 years old). All the participants were graduate or undergraduate students of the authors' laboratory, and all of them already knew each other.

4.1.2 Experimental Task

To verify the effectiveness of the system, we performed a within-subjects experiment. Each group participated in two discussion sessions. In each discussion, the members had to express opinions on a particular topic. In one discussion session, participants obtained feedback (with-feedback condition); in the other, the feedback was not provided (without-feedback condition).

With-feedback condition: The system provides feedback about the scores and the facilitation messages on the participants' tablets.

Without-feedback condition: The scores and facilitation messages are not provided.

Fig. 8 shows the visualization in both conditions. The scores in the upper right and the facilitation messages in the lower right are removed from the visualization in the without-feedback condition.

4.1.3 Procedure

At the start of the experiment, participants received instruction in the usage of the system: how the speaking state is measured and the meaning of the visualization. The group then had a warm-up practice for about three minutes in order to get used to the system. After the warm-up practice, the participants had a seven-minute conversation, with and without feedback, for each of two topics: Topic 1, "Where do you think is the best location for a new convenience store?" and Topic 2, "What goods do you want to handle at a convenience store?" Topic 1 is about selecting the best choice through discussion, which can be classified as an opinion-convergence type. Topic 2 is about picking up several possibilities, which can be classified as an opinion-divergence type. The opinion-convergence type of discussion tends to be led by an opinion leader, and thus other participants may hesitate or find it difficult to present their opinions. Therefore, we targeted the Topic 1 session for balancing the users' utterances by giving feedback messages to the participants. The Topic 2 session is regarded as the baseline to evaluate the effect of the feedback facilitation. To avoid order effects, the order of conditions was balanced across groups.

Fig. 8 The difference in the visualization between the two conditions: (a) with-feedback condition; (b) without-feedback condition

After the two conversation role-plays, the participants were asked to fill in a questionnaire. In the experiment, we used three tablets (two Sony VAIO Duo 11, one Microsoft Surface Pro 3) and a PC with an Intel Core i7 3.4 GHz as the server.

4.1.4 Evaluation MetricsWe evaluate the effectiveness of the system in the

following three metrics.

(1) Average balancing degree of the amount of

utterance: The system provides the balancing scores (Eqs. (5), (6)) to the participants for balancing the amount of utterance based on gamification. It is desirable that the average score in the with-feedback condition be higher than that in the without-feedback condition, since this would suggest that the participants kept their participation balanced. Therefore, we compare the transition of the scores between the with-feedback and without-feedback conditions.

(2) Average total speaking time: Providing feedback on the balancing degree may disturb the participants' natural conversation, because the balancing feedback may overly suppress the participants' speaking. We compare the difference in total speaking time between the with-feedback and without-feedback conditions to measure this negative effect.

(3) Results of four types of questionnaires: First, we compare the average questionnaire scores on a 7-point scale in the two conditions, with-feedback and without-feedback (the result is shown in Fig. 12). Second, we analyze a participant's behavior when he/she is given a facilitation message to change the conversation target (the result is shown in Fig. 13). Third, we analyze what elements were fun for the participants in the with-feedback condition (the result is shown in Fig. 14). Last, we evaluate what

Fig. 9 Individual utterance scores per frame sorted in ascending order for all 8 discussion sessions (red: with feedback, green: without feedback)

items displayed on the tablet are often looked at and whether those items are useful (the result is shown in Fig. 15).
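The balancing scores of Eqs. (5) and (6) are defined earlier in the paper and are not reproduced in this section. As a rough, hypothetical illustration of how a 0-100 balance score over speaking times can behave (100 when all participants have spoken equally, falling toward 0 as one participant dominates), one could use a normalized L1 deviation from the uniform share; the function below is our own sketch, not the paper's actual formula:

```python
def balance_score(speaking_times):
    """Illustrative 0-100 balance score over per-participant speaking times.

    Returns 100.0 when everyone has spoken equally and approaches 0.0 as one
    participant dominates. This is NOT the paper's Eqs. (5), (6); it only
    sketches the general idea of a balance-of-participation score.
    Assumes at least two participants.
    """
    total = sum(speaking_times)
    if total == 0:
        return 100.0  # nobody has spoken yet: treated as perfectly balanced
    n = len(speaking_times)
    shares = [t / total for t in speaking_times]
    # L1 distance from the uniform distribution; the maximum possible
    # distance (one speaker holds everything) is 2 * (1 - 1/n).
    deviation = sum(abs(s - 1.0 / n) for s in shares) / (2.0 * (1.0 - 1.0 / n))
    return 100.0 * (1.0 - deviation)
```

The paper's actual scores are computed per time frame and distinguish individual utterance from pair conversation; this sketch shows only how a deviation-from-uniform measure maps onto the 0-100 range used in the figures.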

4.2 Results and Discussion

4.2.1 The Scores and Amount of Conversation

Four groups participated in the evaluation experiment. Figs. 16-23 show the score transitions for the 8 experimental discussion sessions (2 discussion topics by 4 groups). In each figure, the upper graph shows the utterance ratio of each user over the session time. The lower graph shows the two scores of individual utterance (red line) and pair conversation (green line). The score value ranges from 0 to 100, and a higher value means that the utterance and conversation were better balanced.

For a clearer comparison between the with-feedback and without-feedback conditions, Fig. 9 sorts all the scores of individual utterance per time frame in ascending order. The red line is for the with-feedback sessions and the green one for the without-feedback sessions. It is apparent that the scores of individual utterance with feedback are higher than those without feedback. We also verified the effect statistically with Wilcoxon's rank-sum test, which showed that the difference is significant with p-value < 10⁻⁶.
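The rank-sum comparison described above can be sketched in a few lines. The following is a generic normal-approximation implementation of Wilcoxon's rank-sum test with invented sample data, not the authors' analysis code:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Mid-ranks are used for tied values; for simplicity no tie correction
    is applied to the variance. Returns (z statistic, two-sided p-value).
    """
    combined = list(x) + list(y)

    def midrank(v):
        # 1-based rank of v in the combined sample, averaging over ties
        less = sum(1 for c in combined if c < v)
        equal = sum(1 for c in combined if c == v)
        return less + (equal + 1) / 2.0

    w = sum(midrank(v) for v in x)          # rank sum of the first sample
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0           # expected rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # standard normal survival function via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

For example, feeding in per-frame scores where every with-feedback score exceeds every without-feedback score yields a large positive z and a p-value far below 10⁻⁶, consistent with the significance level reported above. In practice a library routine such as SciPy's rank-sum test would typically be used instead.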

On the other hand, an improvement in the scores of pair conversation was not observed. Fig. 10 shows the scores per frame sorted in ascending order in the same way as Fig. 9; the scores for the with-feedback condition (red line) and those for the without-feedback condition intersect each other, and the p-value of Wilcoxon's test is approximately 0.5.

Fig. 11 shows the difference in the total amount of speech time between the with-feedback and without-feedback conditions. The left-hand and center graphs represent the average total speaking time of each participant for both conditions. The right-hand graph represents the average difference in total speech time for each user. The figure shows that the total speech time is nearly equal for both conditions. We could not confirm that the total speaking time increases in the with-feedback condition; however, at least, it suggests that the feedback facilitation message did not



Fig. 10 Pair conversation scores per frame sorted in ascending order for all 8 discussion sessions (red: with feedback, green: without feedback)

Fig. 11 Amount of speaking time with and without feedback: left) average total time per user, right) average difference between them. Bars denote SD

supresse user’s speaking oppotunity. The balance de-gree of utterance amount on with-feedback conditionwas higher than that of without feedback condition,and the values of the average amount of speakingtime on both conditions have about the same amount,therefore the feedback could balance the utteranceamount without giving negative effects to the totalspeaking time.

4.2.2 Questionnaire and Interview

Fig. 12 shows the result of the questionnaire on a

7-point Likert scale for the with-feedback and without-feedback conditions. A Wilcoxon signed-rank test indicates that Question 1 (I found that controlling the amount of utterance was easy), Question 5 (I felt that I could concentrate on the conversation), and Question 6 (I felt like changing my conversation partner) are statistically significant at the 5% level. The results of Question 1 and Question 6 show that the feedback works well. According to the users' comments on the questionnaire, there were some positive impressions, for example, "I could keep the balance of participation in mind in the with-feedback condition." On the other hand, the result of Question 5 shows that the feedback made a negative impression on some participants; for example, one participant felt that "I could not speak at the timing of my choice because my attention moved to the scores." This suggests that the feedback may place a restriction on the participants' behavior. We will need to improve this point.

On Question 6, 11 of 12 participants answered that they felt like changing their conversation partner with

Fig. 12 Average scores in comparison between the with-feedback and without-feedback conditions

Fig. 13 The answer to the question about the participant's behavior when he/she was given a facilitation message to change the conversation target

Fig. 14 The answer to the question about the fun of the conversation in the with-feedback condition

Fig. 15 Number of votes for interface items often looked at (green frame: useful, red frame: not useful)

feedback, and 1 of 12 participants answered that he did not feel so. Fig. 13 shows the questionnaire result about the participants' behavior when they tried to change a conversation partner. Most of the participants spoke to another person when they felt like changing their conversation partner in the with-feedback con-




Fig. 16 Score transition for Group 1 (Topic 1 with FB)

Fig. 17 Score transition for Group 1 (Topic 2 without FB)

Fig. 18 Score transition for Group 2 (Topic 1 with FB)

Fig. 19 Score transition for Group 2 (Topic 2 without FB)

Fig. 20 Score transition for Group 3 (Topic 1 with FB)



Fig. 21 Score transition for Group 3 (Topic 2 without FB)

Fig. 22 Score transition for Group 4 (Topic 1 with FB)

Fig. 23 Score transition for Group 4 (Topic 2 without FB)

dition. Some of the participants gave the other participants a chance to speak and encouraged them to have a conversation. The result suggests that the feedback has enough effect to provide facilitation for changing the participants' behavior.

Fig. 14 shows the questionnaire result about the fun of the conversation in the with-feedback condition. It shows that 8 of 12 participants felt that it was interesting that their behavior made the scores change, and that 6 of 12 participants felt that it was interesting that the conversation had game elements, for example, the increase and decrease of the scores and the bar charts. Two participants chose "Others" and answered, "The advice from the system was useful" and "It was helpful that I could view my conversation objectively." However, there were some negative impressions, for example, "I could not concentrate on the conversation when I tried to aim at a high score." We hope to make improvements that make scoring more interesting for the participants, since the current system just shows the scores.

Fig. 15 shows the number of votes for interface items the participants often looked at. The green frames represent items that were useful, and the red frames represent items that were

not useful. Most of the participants were interested in "Score of balance of amount of utterance" and "facilitation message" and answered that these feedbacks were useful. There were some positive impressions, for example, "I felt that I could hear the others' opinions since the system provides facilitation messages"; on the other hand, some participants felt that they could not concentrate on the conversation when they tried to aim at a high score. The participants answered that "horizontal bars of cumulative speaking and watching times", "front camera image", "back camera image", and "text indicating a speaking/listening state" were not useful.

The results of the questionnaire suggest that the system, with its group scoring and facilitation, makes it easier for participants to control the amount of utterance and also gives participants a chance to change their conversation partner. However, there were some negative impressions caused by the feedback, so the method of presenting the scores and the facilitation messages needs to be examined. Possible improvements for these problems include iconizing frequently shown messages or suppressing the repetition of an already displayed message for a while.




5. Conclusion

We proposed ScoringTalk, which supports the activation of a co-located meeting based on the balance of the amount of conversation. Most previous computer-supported meeting systems are difficult to set up because they require microphones and projectors. On the other hand, our system is easy to use, since each participant has a tablet that measures and visualizes the conversation. The system not only measures and visualizes a conversation but also provides scores with game elements and facilitation of the participants' behavior. We evaluated the system with and without feedback. The results showed that the proposed method has an effect on balancing participation, based on the balance degree of the amount of conversation. The questionnaire results also showed that the participants felt the effect of the feedback in facilitating their behavior and in controlling the timing of their utterances. However, there were some negative impressions that the participants could not concentrate on the conversation because they paid attention to the feedback; we need to improve this.

In this paper, we investigated only three-person conversations; the effectiveness of the feedback for conversations of four or more persons should be investigated. The current facilitation method depends on a three-person conversation, so we should extend it to such conversations. In addition, all the participants in the experiment were students. Unbalanced participation is likely to occur in a conversation among people in different positions, such as teachers and students. We would also like to use the system in such a situation and evaluate its effectiveness. Besides, we consider that our system can be applied to education and training systems for learning how to conduct a discussion. Participants can know their amount of conversation through the visualization and can control the timing of speaking and listening via the scores and the facilitation messages. By continuously using the system in discussions, participants may improve their conversation time management and facilitation skills. We would like to evaluate the system from this viewpoint in the future.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number 15H02764 and by the Foundation for the Fusion of Science and Technology. The authors would like to thank OMRON Corporation for courteously allowing us to use the OKAO® Vision Library.

References

[1] H. Adachi, S. Myojin and N. Shimada: ScoringTalk: A tablet system scoring and visualizing conversation for balancing of participation; SIGGRAPH ASIA 2015 Mobile Graphics and Interactive Applications, ACM, pp. 9:1–5 (2015)

[2] T. Bergstrom and K. Karahalios: Conversation clock:

Visualizing audio patterns in co-located groups; System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on, IEEE, pp. 78–78 (2007)

[3] N. Ohshima, K. Okazawa, H. Honda and M. Okada: TableTalkPlus: An artifact for promoting mutuality and social bonding among dialogue participants; Journal of Human Interface Society, Vol. 11, No. 1, pp. 105–114 (2009) (in Japanese)

[4] J. Terken and J. Sturm: Multimodal support for social dynamics in co-located meetings; Personal and Ubiquitous Computing, Vol. 14, No. 8, pp. 703–714 (2010)

[5] J. M. DiMicco, A. Pandolfo and W. Bender: Influencing group participation with a shared display; Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, ACM, pp. 614–623 (2004)

[6] S. Tausch, D. Hausen, I. Kosan, A. Raltchev and H. Hussmann: Groupgarden: supporting brainstorming through a metaphorical group mirror on table or wall; Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, ACM, pp. 541–550 (2014)

[7] T. Takabayashi, Y. Kamiya, K. Kunieda and K. Yamada: A discussion support system using visualization of utterance intentions for group decision making; IEICE Technical Report, Human Information Processing, Vol. 111, No. 60, pp. 43–48 (2011) (in Japanese)

[8] T. Kim, A. Chang, L. Holland and A. S. Pentland: Meeting mediator: enhancing group collaboration using sociometric feedback; Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, ACM, pp. 457–466 (2008)

[9] K. Fujita, Y. Itoh, H. Ohsaki, N. Ono, K. Kagawa, K. Takashima, S. Tsugawa, K. Nakajima, Y. Hayashi and F. Kishino: AmbientSuite: enhancing communication among multiple participants; Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, ACM, p. 25 (2011)

[10] J. F. Kelley: An empirical methodology for writing user-friendly natural language computer applications; Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 193–196 (1983)

[11] G. Schiavo, A. Cappelletti, E. Mencarini, O. Stock and M. Zancanaro: Overt or subtle? Supporting group conversations with automatically targeted directives; Proceedings of the 19th International Conference on Intelligent User Interfaces, ACM, pp. 225–234 (2014)

[12] OMRON Corporation: Image Sensing Technology OKAO Vision; http://www.omron.com/ecb/products/mobile/ (accessed 2016-01-17)

[13] H. Adachi, S. Myojin and N. Shimada: Tablet system for sensing and visualizing statistical profiles of multi-party conversation; Consumer Electronics (GCCE), 2014 IEEE 3rd Global Conference on, IEEE, pp. 407–411 (2014)

[14] M. Tomioka, S. Ikeda and K. Sato: Approximated user-perspective rendering in tablet-based augmented reality; Mixed and Augmented Reality



(ISMAR), 2013 IEEE International Symposium on,IEEE, pp. 21–28 (2013)

[15] K. Bachour, F. Kaplan and P. Dillenbourg: An interactive table for supporting participation balance in face-to-face collaborative learning; IEEE Transactions on Learning Technologies, Vol. 3, Issue 3, pp. 203–213 (2010)

[16] G. Burns: The secrets of team facilitation; Training& Development, Vol. 49, No. 6, pp. 46–53 (1995)

Authors

Hiroyuki Adachi

He was born in 1991. He received the B.E. and M.E. degrees from Ritsumeikan University in 2014 and 2016, respectively. He is now an employee of Yahoo Japan Corporation.

Seiko Myojin (Member)

She is a specially appointed researcher at Osaka University. She is a member of IPSJ, VRSJ, HIS, ISCIE, and The Society for Art and Science. She was previously a tokunin assistant professor at Ritsumeikan University. Before joining Ritsumeikan University, she was a specially appointed researcher at Osaka University and experienced a short internship at NTT Cyber Space Laboratories. She obtained a doctorate from Osaka University. She was also a special research student at Nara Institute of Science and Technology for a period of time.

Nobutaka Shimada (Member)

He received the B.E., M.E. and Ph.D. (Engineering) degrees from Osaka University in 1992, 1994, and 1997, respectively. He was an assistant at the Graduate School of Engineering, Osaka University from 1997 to 2002. In 2003, he was an associate professor at the Graduate School of Engineering, Osaka University. In 2004, he moved to the College of Information Science and Engineering, Ritsumeikan University, where he is currently a professor (2011-). In 2007, he was engaged in research as a visiting associate professor at the Robotics Institute, Carnegie Mellon University. His research interests include computer vision, gesture interfaces, and interactive robots. He is a member of IEEE, IEICE, IPSJ, JSAI, HIS, ISCIE, and RSJ.

