

Expert Systems with Applications 38 (2011) 1079–1088

Contents lists available at ScienceDirect

Expert Systems with Applications

journal homepage: www.elsevier.com/locate/eswa

Motivated learning agent model for distributed collaborative systems

Rui Wang a, Xiangyu Wang a,b,*,1, Mi Jeong Kim b

a Faculty of Architecture, Design and Planning, University of Sydney, Australia
b Department of Housing and Interior Design, Kyung Hee University, Seoul, South Korea


Keywords: Intelligent agent interaction; Computer-supported cooperative work (CSCW); Remote collaboration

0957-4174/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2010.05.003

* Corresponding author at: Faculty of Architecture, Design and Planning, University of Sydney, Australia. Tel.: +61 290367128; fax: +61 293513031.

E-mail addresses: [email protected] (R. Wang), [email protected] (X. Wang), [email protected] (M.J. Kim).

1 International scholar.

The paper develops and discusses a theoretical model for collaborative design systems based on motivated learning agents, extended with a novel self-development module that helps the system improve itself. The Self-Development Agent, which builds on previous work, is an intelligent agent that not only receives information from sensors in the environment but also gives valuable suggestions that could help improve the system. Two case studies with different system setups are described in detail to help better understand this model. The theoretical model is not limited to the specific systems described in this paper, but could be adapted to other collaboration systems as well.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Collaboration is now widely believed to add value to individual work in various aspects such as time saving, cost reduction, and effective problem solving (Tay & Roy, 2003; Wang & Tadisina, 2007). Despite much research in this area, there are growing demands for collaboration systems with a high level of intelligence. Existing models of intelligent agents have been discussed extensively. These agent-based models provide solutions for many intelligent systems, especially computer-supported collaborative systems that facilitate remote collaboration. However, current models still have limitations. One limitation is that the agents in current models focus on the working process only: the models only look into issues related to users' work, not to the system itself. In other words, they cannot help developers of those intelligent systems improve the systems' functions.

This paper introduces a theoretical intelligent agent model with learning and self-development functions. An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous individuals in a network, with a view to assessing their effects on the system as a whole (Axtell, Andrews, & Small, 2003). The theoretical model, which is the focus of this paper, is developed on the basis of previous intelligent learning agent models, such as the Motivated Learning Agent model by Maher, Merrick, and Saunders (2007). The aim of this theoretical model is to facilitate collaboration between distributed designers through intelligent agents.

2. Background

With the increasing capabilities and availability of smart sensors, the environments in which humans live, work, and study are becoming more intelligent than ever. A vast number of agent-based tools and applications have been developed to support human communication and activities. Among them are the MavHome project (Cook, Youngblood, Edwin, Heierman, & Gopalratnam, 2003), a CAD virtual work platform by Maher et al. (Maher, Liew, Gu, & Ding, 2005), and the IRoom system (Johanson, Fox, & Winograd, 2002), each described in the following paragraphs.

MavHome (Managing an Intelligent Versatile Home), one of several smart-home projects, aimed to build a comfortable home for inhabitants at a lower operational cost. It was built upon an agent-based architecture. Each agent is a self-contained component that could work independently or collaborate with other agents when necessary in order to maximize and maintain comfort for the inhabitants (Cook et al., 2003). Most of its intelligence came from machine learning algorithms implemented in the system. These algorithms, embedded in the agents, could make decisions on feasible actions based on information either gathered directly from the sensors or obtained indirectly from other agents. For instance, a task-based Markov model (Rabiner & Juang, 1986) was used to generate future actions based on the current state of the agents and historical action sequences. It could, for example, turn up the heat early in the morning to warm the bedroom to the optimal waking temperature when it sensed that the outside temperature was below a certain threshold and previous records indicated that the owner usually turned on heaters in similar conditions. Experimental results showed extremely high accuracy in predicting inhabitant activities in MavHome (Cook et al., 2003).
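The kind of sequence prediction MavHome performed can be illustrated with a minimal first-order Markov sketch. The class, the logged routines, and the action names below are hypothetical and only mirror the idea of predicting the next action from historical transition frequencies; they are not MavHome's actual implementation.

```python
from collections import Counter, defaultdict

class MarkovActionModel:
    """First-order Markov model over logged action sequences (illustrative)."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # current action -> next-action counts

    def train(self, sequences):
        for seq in sequences:
            for current, nxt in zip(seq, seq[1:]):
                self.transitions[current][nxt] += 1

    def predict_next(self, current):
        """Return the historically most frequent action following `current`."""
        counts = self.transitions.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Hypothetical morning routines logged by a smart home
history = [
    ["wake", "turn_on_heater", "shower"],
    ["wake", "turn_on_heater", "coffee"],
    ["wake", "read_news"],
]
model = MarkovActionModel()
model.train(history)
print(model.predict_next("wake"))  # -> turn_on_heater (seen 2 of 3 mornings)
```

A real system would additionally condition on sensed context (e.g. outside temperature below a threshold), as the paragraph above describes.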

Another project, IRoom, was developed at Stanford University to present a smart office environment. It focused on HCI (human-computer interaction) in an interactive meeting room equipped with three touch-sensitive wall displays, a tabletop, cameras, microphones, a wireless network, and other interactive devices. It provided a novel workspace for meetings, brainstorming, and design sessions (Johanson et al., 2002). Another line of research on supporting collaborative design, focused on an agent-based approach, was carried out by Maher et al. (2005). They integrated CAD (computer-aided design) software systems with a 3D virtual environment. Changes made in the CAD software could be tracked and updated seamlessly in the 3D virtual world models by agents (e.g. a wall agent in a building). In addition, avatars were used to represent multiple designers in the virtual world. The designers could explore the virtual world and manipulate 3D objects as they walked around. For example, they could duplicate objects such as trees and walls, move buildings from one location to another, and edit the appearance of an object via a dialog box. Designers would benefit considerably from this integration due to the shared visualization of the model and the easy access of the user interface (Maher et al., 2005).

Maher et al. (2007) have also developed the motivated learning agent model based on their research. The model is shown in Fig. 1.

Unlike previous reflex agent models, their computational model (Maher et al., 2007) adds a Motivated Agent and a Learning Agent. The motivation process takes information from both the sensed environment and the agent's own memory to trigger learning, planning, action, or other agent processes (Maher et al., 2007). It creates goals and stimulates action towards those goals. Agents are motivated to create goals to understand and repeat interesting events. The role of the motivation process in this model is to provide signals that direct the learning process (Maher et al., 2007). The learning process encapsulates new knowledge as behaviors once an agent can repeat an interesting event at will (Maher et al., 2007).

Fig. 1. Motivated learning agent model by Maher et al. (2007).

Fig. 2. Intelligent learning agent (Russell & Norvig, 2002).

Russell and Norvig (2002) have grouped agents into five classes based on their degree of perceived intelligence and capability: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. They pointed out that learning allows agents to operate in initially unknown environments and to become more competent than their initial knowledge alone might allow. Fig. 2 shows their concept model.

The environment sends percepts to the learning agent's sensors, and they go to the critic module, which evaluates them against a performance standard. The feedback from the critic goes to the learning element. After learning, changes are sent to the performance element, and knowledge is generated and sent back to the learning element. Meanwhile, the learning element also generates learning goals and sends them to the problem generator. Problems are solved through experiments, and solutions are sent to the performance element. Actions decided by the effectors take place based on the performance element and are fed back to the environment.
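The loop described above can be sketched in outline. All component behaviours below are illustrative placeholders chosen for this example, not Russell and Norvig's implementation; the percept format and rule table are assumptions.

```python
class LearningAgent:
    """Minimal sketch of the critic / learning element / performance element /
    problem generator loop. Components are placeholders, not a faithful model."""
    def __init__(self, performance_standard):
        self.performance_standard = performance_standard  # used by the critic
        self.rules = {}           # knowledge held by the performance element
        self.learning_goals = []  # fed to the problem generator

    def critic(self, percept):
        """Compare a percept against the performance standard; emit feedback."""
        ok = percept.get("score", 0) >= self.performance_standard
        return {"percept": percept, "ok": ok}

    def learning_element(self, feedback):
        """Update knowledge from critic feedback and propose learning goals."""
        if not feedback["ok"]:
            state = feedback["percept"]["state"]
            self.learning_goals.append(f"explore:{state}")  # for the problem generator
            self.rules[state] = "try_alternative"           # new knowledge

    def performance_element(self, state):
        """Choose an action from current knowledge (default when unknown)."""
        return self.rules.get(state, "default_action")

    def step(self, percept):
        self.learning_element(self.critic(percept))
        return self.performance_element(percept["state"])

agent = LearningAgent(performance_standard=0.5)
action = agent.step({"state": "s1", "score": 0.2})  # below standard, so it learns
print(action, agent.learning_goals)
```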

However, the focus of the learning agent model is the learning that occurs during the working process and how to facilitate and benefit the end users. It does not develop the system itself when problems are found through the learning process. In other words, it does not benefit the developers or the systems themselves. In this case, such models bring benefits to end users only in the short term: as time goes by, end users develop new needs, but the systems remain the same and cannot meet the new requirements. Therefore, it is both possible and necessary to add a self-development function to the learning model. The self-development function could gather the problems from the learning module, generate solutions, and pass them to system developers. System developers receive information from the Self-Development Agent and send their feedback on the solutions back to the agent. After a solution is confirmed, the system can improve itself. In this way, the model benefits both the developer side and, in the future, the end-user side. This self-development function is the main focus of this paper and distinguishes the new agent-based model described here from existing work.
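The proposed cycle of gathering problems, proposing solutions, and applying them only after developer confirmation can be sketched as follows. The class and method names are hypothetical, and the solution generator is a placeholder heuristic rather than a real analysis step.

```python
class SelfDevelopmentAgent:
    """Sketch of the self-development cycle: collect problems, propose a
    solution, and apply it only after developer confirmation."""
    def __init__(self):
        self.problems = []
        self.applied = []

    def report_problem(self, problem):
        self.problems.append(problem)

    def propose_solution(self, problem):
        # Placeholder heuristic; a real module would analyse logged events.
        return f"patch for: {problem}"

    def developer_review(self, solution, approve):
        """Apply the solution only on positive developer feedback."""
        if approve:
            self.applied.append(solution)
            return "applied"
        return "rejected; generating alternative"

agent = SelfDevelopmentAgent()
agent.report_problem("avatar movement feels unnatural")
solution = agent.propose_solution(agent.problems[0])
print(agent.developer_review(solution, approve=True))  # -> applied
```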

Computer-based collaborative systems, which enable individuals to communicate with each other, are widely used today. These systems include many text-based tools (e-mail, instant messaging, blogs and wikis) and richer media tools (voice-over-IP (VoIP), voice mail and tele-conferencing). The concept of media richness pervades discourse about virtual worlds (Short & Christie, 1976). It was proposed by Daft and Lengel (1986) that the richer a medium (which means the more it transmits the 'social presence' of collaborators), the more effectively it should substitute for face-to-face interaction.

Media Richness Theory (MRT) holds that the aim of communication is to reduce uncertainty and equivocality, so that communication efficiency can be improved. MRT also states that uncertainty results from a lack of information, while equivocality relates to negotiating methods for ambiguous situations. However, Churchill and Bly claimed that virtually oriented media richness was not a prerequisite for the creation of sufficient social co-presence for maintaining collaborative relationships (Churchill & Bly, 1999). Moore, Ducheneaut, and Nickell (2006) suggested that avatars in virtual worlds are not necessarily based on users' physical context, although they should to some degree reflect their owners' use of the user interface (UI). They recommended that designers ultimately consider the trade-offs between privacy and transparency and then decide what degree of richness should be given to each type of UI (Moore et al., 2006).

The impact of using voice in computer-supported collaborative systems has recently become a popular research topic. When designing user interfaces for a shared virtual environment, the main goal is to build a hierarchy of menus and functions that feels natural and well-structured to the users and does not interfere with or mislead them (Shneiderman & Plaisant, 2004). Wadley, Gibbs, and Ducheneaut (2009) presented a paper at the OZCHI conference in 2009 discussing their experimental results on collaboration in the SecondLife environment, arguing that preferences for voice or text reflect a broader problem of managing social presence in virtual contexts.

Fig. 3. Theoretical agent model with self-development module in collaborative design environment.

A number of researchers have suggested that media richness modalities should be considered when designing computer-supported collaborative systems; however, little research has been done in this area. Inspired by the agent-based intelligent environment research mentioned above, this paper creates a theoretical model that adopts the concept of learning agents, extends it with a self-development module, and then applies this model to Mixed Reality (MR) and Virtual Reality (VR) supported collaborative design environments. The theoretical model, especially the self-development function, is presented and described in detail in the following sections.

3. Concept model

Based on the Motivated Learning Agent model by Maher et al. (2007) and the learning agent model by Russell and Norvig (2002), and with consideration of the MRT literature, a module could be added to existing agent-based models to provide self-development functions in the context of media richness modalities. This section develops an extended model with a self-development module. The model has features and functions intended especially for, but not limited to, collaborative design systems. An example use of this model is described in the following section.

Fig. 3 depicts the theoretical model, which contains both learning and self-development functions in the virtual design environment.

This theoretical model contains two design environments: the physical environment and the virtual environment. Designers (who have been assigned a design task) and system developers (who are in charge of the development and maintenance of the system) work in the physical design environment; however, they also interact with the virtual design environment, which contains the agent modules, including the self-development module. The next paragraph illustrates how this model works.



Designers in the physical design environment interact with the virtual design environment and the agents through human-computer interfaces such as keyboards, mice, voice input devices, and gesture sensors. After sensation, information is sent to the "compare and analyze" module rather than producing feedback directly. This process compares and analyzes incoming information and uses the results (sensed interesting events, especially repeated events) to trigger the Learning Agent, which facilitates users by adapting to their design habits, and the Self-Development Agent, which gathers useful information for system self-development, as shown in Fig. 3. The Self-Development Agent is a module built with intelligence: it not only receives information but also gives valuable suggestions that could help improve the system, especially with respect to media richness modalities. It works as a "research assistant" for system developers. The learning process is also triggered by the actions module and encapsulates new knowledge as behavior when monitored events have been repeated. Although the Self-Development Agent and the Learning Agent both help improve the system, they are different types of agents with different focuses: the Self-Development Agent focuses on the functional development of the system itself, while the Learning Agent focuses on users' operating preferences. Data gathered from the learning process are logged and sent to the database for future reference. As shown in Fig. 3, there is a cycle among Learning, Finding Problems, Self-Development, and Actions. The Learning module gets knowledge from Actions, and this knowledge can in turn affect the Learning process; accordingly, the Learning process is affected not only by current Actions but also by previous experiences stored in the database.

On the designers' side, each end user keeps a synchronized database that is maintained and updated in real time over the network, so each designer can share sensed and learnt information with the other designers. After learning, the agent module can find problems automatically based on repeated events and pass them to the self-development module, which analyzes those events and generates a log for the developer. As soon as the developer provides a solution, the self-development module passes it to the actions module, which adopts the improvement. After the Actions process, information is sent to each designer as feedback. Meanwhile, the learning module also seeks problems and sends them to the self-development module. The self-development module analyzes those problems and generates possible solutions and suggestions from the viewpoint of system developers regarding how the system could be improved. System developers review those solutions and suggestions and then give feedback to the self-development module. If a system developer confirms a solution, the solution is executed and the system is improved. If the developer sends negative feedback, the self-development module can either generate a new solution or accept the system developer's own solution. In the following sections, two case studies that adopt the theoretical agent model are described. The first describes how a Mixed Reality (MR) supported collaborative design system, namely MR-Collab, works across two distributed design spaces; the second describes how a Virtual Reality (VR) supported collaborative design system, namely VR-Collab, supports multiple designers in different places working together while improving the system itself at the same time.
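The "compare and analyze" dispatch described above might look roughly like the sketch below. The event categories and the repeat threshold are illustrative assumptions made for this example, not values specified by the model.

```python
def compare_and_analyze(event, seen_counts, repeat_threshold=3):
    """Route a sensed event to an agent. The categories and the repeat
    threshold are illustrative assumptions."""
    name = event["name"]
    seen_counts[name] = seen_counts.get(name, 0) + 1
    if event["category"] == "design" and seen_counts[name] >= repeat_threshold:
        return "learning_agent"          # repeated design behaviour -> learn it
    if event["category"] in ("communication", "co-presence"):
        return "self_development_agent"  # feeds system-improvement analysis
    return "no_action"

counts = {}
for _ in range(3):
    target = compare_and_analyze(
        {"name": "use_green_wallpaper", "category": "design"}, counts)
print(target)  # becomes "learning_agent" once the event has repeated 3 times
print(compare_and_analyze({"name": "voice_chat", "category": "communication"}, counts))
```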

Fig. 4. Working scenario of MR-Collab from David’s side.

4. Case study 1: MR-Collab

This section introduces how the theoretical model could be applied to a Mixed Reality (MR) supported collaborative design system, namely MR-Collab, to facilitate designers' work across two distributed design spaces.

4.1. System overview

To better understand this computational model, a scenario of using the MR-Collab system is discussed in the context of the model. Two separate rooms are configured as two intelligent environments. The system not only functions as a collaborative platform for distributed users but also includes a self-development module that learns behaviors from the users and generates potential solutions and patterns for better design activities. David and Lily are two designers carrying out an interior design task aided by MR-Collab. They are located in different cities and want to design the interior decoration of a room together. Jack is one of the developers of the MR-Collab system and wants to figure out which functions need to be improved, added, or abandoned. The most effective method is to get feedback from use of the system in a productive design task. Therefore, while David and Lily are doing the design task via MR-Collab, Jack obtains data for further system development.

Fig. 4 demonstrates the working system from David's side. He wears a head-mounted display (HMD), through which he can see both virtual objects generated from markers and Lily's virtual avatar. The view from Lily's side is similar, except that the virtual avatar is David's. They can see each other's virtual avatar moving, waving, or working on virtual objects, and they can communicate verbally through built-in microphones and headphones. Their communication and behaviors are monitored and analyzed by the self-development module of the system. After analyzing these data, the MR-Collab system sends suggested solutions back to Jack, which could help solve problems found during operation of the system. There is also a life-long learning process in this system. For instance, if the system recognizes that David always uses a specific style of furniture when designing a room, it automatically identifies that pattern and starts with that style the next time David uses MR-Collab for similar design work.

4.2. Applying model to MR-Collab system

The model is applied to the MR-Collab system described in the previous section. The application focuses on the self-development and learning processes, to examine how the model helps users with their design work and helps system developers improve the MR-Collab system. Fig. 5 shows the theoretical model as applied to the MR-Collab system.

There could be two distributed rooms with one designer in each room. The rooms have sensors in both the physical and virtual environments. Sensors in each room capture states not only in the local room but also, through the network, in the remote room. The model works with sensation, analysis, and learning agents in the collaborative environment. One extension is made to the model: the addition of the Self-Development Agent and the ability to model the interactions between two intelligent rooms (instead of the traditional human-environment interactions in one single environment). The agent modules sense interesting events from the physical design environment and generate reasonable effects. The model also contains a search agent that automatically retrieves useful information from the Internet or a database when needed. For instance, in MR-Collab, it could count the number of participants attending a conference meeting and then decide the number of chairs to put in the conference room.

The size of each room is identical so that when real objects (e.g. walls) are mixed with virtual objects (e.g. desks), the two designers sense the same effect; otherwise conflicts and collisions may occur and design quality could be compromised. The initial prototype will be implemented in a room of 8 m (length) x 5 m (width) x 3 m (height). This provides a fairly large area (40 m2) for the designer, suitable for most design tasks such as living rooms, bedrooms, kitchens, and bathrooms. In addition, the space of this room can be virtually segmented by virtual walls to match the actual design task if necessary. For example, when the actual room is much smaller or is L-shaped, a virtual wall can be generated to reflect that, as shown in Fig. 4.

The virtual objects library contains furniture samples made with the 3D modeling software Maya. They are saved in various catalogues for easy access by designers.

These objects are mixed with the real environment via markers. The software package ARToolKit (Kato & Billinghurst, 1999) is utilized in this system to render virtual avatars and objects. Each marker contains a unique pattern so that it can be easily identified in the video stream. These markers work like barcodes or signatures, conveying the size, position, and rotation of each marker. When markers are attached to David's arms, body, and legs, as demonstrated in Fig. 4, his motion can be captured and roughly duplicated on his remote 3D avatar in Lily's environment. Likewise, the marker on the wall is fixed in order to give a central reference point to the mixed environment so that the two designers have unified effects. Other markers are manipulable to facilitate the design. For instance, the one on the floor can be moved and/or rotated by David to test the result of his choice among different beds. This could yield a better and more effective user experience than a traditional mouse/keyboard interface.

Fig. 5. Applying the agent model to the MR-Collab system.

Illumination is another concern that needs to be addressed before using the MR-Collab system. Calibration of lighting conditions is beneficial not only to the efficiency of design but also to the performance of marker detection in ARToolKit. The actual calibration can be achieved either by changing the physical room lights or by adjusting the HMD software.

Finally, each room has a computer that analyses the data and exchanges information with the remote room over a standard IP network. A client/server software architecture will be adopted in the prototype: the room that initializes the connection runs in server mode, and the one that accepts the connection runs in client mode.
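The exchange between the two rooms can be sketched with plain TCP sockets on localhost. The message format ("move_marker:bed") and the acknowledgement scheme are hypothetical; the prototype's actual protocol is not specified in the paper.

```python
import socket
import threading

ready = threading.Event()
addr = {}

def server():
    """The room that initialises the session runs in server mode."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))         # any free port on localhost
        srv.listen(1)
        addr["port"] = srv.getsockname()[1]
        ready.set()                         # tell the client the server is up
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)          # e.g. a serialised design event
            conn.sendall(b"ack:" + data)    # acknowledge back to the client

t = threading.Thread(target=server)
t.start()
ready.wait()

# The remote room runs in client mode and exchanges one message.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", addr["port"]))
    cli.sendall(b"move_marker:bed")
    reply = cli.recv(1024)
t.join()
print(reply.decode())  # ack:move_marker:bed
```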

4.3. Working scenario

As in the scenario described above, David and Lily are two designers, and each has their own preferences in design and communication patterns. The system monitors their behaviors through sensors installed in each room. The Sensation process handles behaviors from both the local space and the remote space over the network. For instance, if Lily frequently uses a light green colour for wallpaper, this is recognized not only at her local space but also at David's space. After the Sensation process, the Motivated Agent receives the designers' preferences. When the Motivated Agent identifies some interesting events as a designer's preference, for example a specific design style, it triggers the Learning Agent, which records this preference. This information is stored in the database and synchronized with other MR-Collab users in real time. Once David's style has been identified by the system in his intelligent environment, it is also learned and updated at the remote intelligent environment where the MR-Collab system is installed. Therefore, each time David starts a design work session, the system offers his favourite style, such as a set of preferred virtual objects. If David uses virtual objects other than those included in the previously stored style, the Learning Agent can recognize whether this is a one-off change or a repetitive change. Confirmation of a repetitive change then triggers the learning process and updates the style in the database. The Learning Agent can also learn from design experiences in different environments; for instance, it can distinguish a conference room from a bedroom.

Fig. 6. Three designers working together in SecondLife (Linden Lab, 2003).
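The distinction between a one-off change and a repetitive change can be sketched as a simple counter that promotes an object into the stored style only after it repeats. The class name, threshold, and object names are illustrative assumptions.

```python
class StyleLearner:
    """Sketch: confirm a style change only after it repeats, then update
    the shared style database."""
    def __init__(self, repeat_threshold=2):
        self.repeat_threshold = repeat_threshold
        self.pending = {}          # object -> times seen so far
        self.stored_style = set()  # confirmed preferences (synchronised DB)

    def observe(self, obj):
        """Record use of a virtual object; promote it once seen often enough."""
        if obj in self.stored_style:
            return "already known"
        self.pending[obj] = self.pending.get(obj, 0) + 1
        if self.pending[obj] >= self.repeat_threshold:
            self.stored_style.add(obj)
            return "style updated"
        return "one-off change (for now)"

learner = StyleLearner()
print(learner.observe("art_deco_lamp"))  # first use: treated as one-off
print(learner.observe("art_deco_lamp"))  # repeated: style updated
```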

If the Motivated Agent categorizes some events as communication or co-presence behaviors, it triggers the Self-Development Agent, which helps with the corresponding development of the system. Co-presence, also known as social presence, is a term used in virtual reality and online learning. It is defined as consisting of two dimensions: co-presence as a mode of being with others, and co-presence as a sense of being with others. Mode of co-presence refers to the physical conditions that structure human interaction. Sense of co-presence, on the other hand, refers to the subjective experience of being with others that an individual acquires in interaction (Zhao, 2003). The Self-Development Agent reasons about data produced both by physical sensors in the room, such as markers that monitor users' interactions with the environment (mode of co-presence), and by users' experiences, for example the content of verbal communication between users, which reflects their feelings (sense of co-presence). It records the designers' frequency of verbal communication, eye contact, gestures used to express themselves, and collaboration on the design work to assess the level of co-presence. The Self-Development Agent can then analyze the data and generate a possible solution. For example, if the data gathered by the system show that the designers were doing the design work individually rather than collaborating, it may ask the designers for the reason. If the cause is poor co-presence, for example if David reports that the movement of Lily's avatar is not natural and he does not feel that he is collaborating with Lily, then a suggestion that Lily's avatar should be improved is sent to the system developer, Jack.
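One way such an assessment could be sketched is as a weighted score over logged interaction counts. The weights, the normalisation window, and the flagging threshold below are assumptions made for this example, not values from the paper.

```python
def copresence_score(log, weights=None):
    """Estimate co-presence from logged interaction counts. The weights and
    the 10-event normalisation window are illustrative assumptions."""
    weights = weights or {"verbal": 0.4, "eye_contact": 0.2,
                          "gesture": 0.2, "joint_edit": 0.2}
    total = sum(log.get(kind, 0) * w for kind, w in weights.items())
    return min(1.0, total / 10.0)  # clamp to [0, 1]

session = {"verbal": 6, "eye_contact": 2, "gesture": 1, "joint_edit": 0}
score = copresence_score(session)
if score < 0.5:  # hypothetical threshold for flagging poor co-presence
    print(f"low co-presence ({score:.2f}): flag to Self-Development Agent")
```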

5. Case study 2: VR-collab

This section describes how a Virtual Reality (VR) supported collaborative design system, namely VR-Collab, can support multiple designers in different places working together while improving the system itself at the same time.

5.1. System overview

The concept model applies not only to two-room-based systems but also to multi-location-based systems; it can be expanded by adding more end users (more rooms/locations). For better understanding of this concept, a scenario has been established on the SecondLife platform involving multiple end users. In this scenario, three designers located in distributed places collaborate on the task of designing a virtual museum in VR-Collab. It should be noted that the concept model is not limited to three designers; there could be as many as the system can afford, given the available bandwidth and server space.

Andrea, Roy and Irene are three designers located in Sydney, Melbourne and Gold Coast respectively. They are collaborating on a project to design a fancy-looking museum. VR-Collab provides a virtual environment where the three designers can work together. Each of them is represented by a virtual human, namely an avatar, on the SecondLife platform. In the virtual environment, each designer can see the other designers' avatars moving, talking, or working on tasks in real time. They can also express themselves through their avatars' body movements, such as waving the arms, jumping, nodding or shaking heads, and they can communicate with other designers' avatars through verbal chatting, instant messages and eye contact. Fig. 6 shows a scenario of three designers working together in the VR-Collab environment.

All three designers share the same view in VR-Collab. They can navigate the virtual world from both first- and third-person viewpoints. They create virtual objects in the virtual environment, and each object keeps a record of its "owner" (who created it). Intelligence functions can be added to each object by coding in LSL (Linden Scripting Language, used in scripts that give behavior to SecondLife objects); therefore, an agent module can be enabled in VR-Collab, which automatically facilitates the collaborative design. For instance, if Irene had worked in the VR-Collab system individually for 2 hours and then left, when Roy comes next time, the agent would tell Roy about this. Furthermore, as the design process goes on, the agent could monitor each designer's working hours and give suggestions on times when they could work together.
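The awareness behaviour above (telling Roy about Irene's earlier session) can be sketched as a small session log replayed to the next arriving designer. This is a hypothetical sketch under invented names and data structures; in the actual system such logging would live in LSL scripts and the external database.

```python
from datetime import datetime, timedelta

# Hypothetical session log: each entry is (designer, start, end).
session_log = []

def end_session(designer, start, end):
    """Record that a designer worked from `start` to `end`."""
    session_log.append((designer, start, end))

def greeting_for(arriving_designer):
    """Summarize other designers' recent sessions for the newcomer."""
    lines = []
    for designer, start, end in session_log:
        if designer != arriving_designer:
            hours = (end - start).total_seconds() / 3600
            lines.append(f"{designer} worked here for {hours:.1f} h since your last visit.")
    return lines

start = datetime(2010, 5, 3, 9, 0)
end_session("Irene", start, start + timedelta(hours=2))
print(greeting_for("Roy"))
```

When Roy arrives, the agent would deliver these lines as a chat message or notification.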

5.2. Applying model to SecondLife system

The motivated learning agent model, as shown in Fig. 7, is applied to the VR-Collab system. Unlike the MR-Collab system, the VR-Collab system does not require complicated setups on the designers' side. The SecondLife platform is already a mature commercial system; designers only need a computer with the SecondLife platform installed. The system developers have built the VR-Collab system in SecondLife. They script basic design tools with agent functions written in LSL. Those agent functions follow the steps of sensation, analysis, learning and activation; the model also has the self-development module and can be connected to an external database. However, the model differs from the one used in the MR-Collab system. Although the three designers are physically located in different spaces, they usually share the same design environment and the same agents within that environment. Therefore, the model in the VR-Collab system can be presented as shown in Fig. 7.

Based on this theoretical model, designers in the real world control their avatars in the virtual design environment. The sensor in VR-Collab senses the avatars' performance and sends interesting events to the analyzer to compare and analyze. When the analysis reaches a certain result, the result is passed to the learning module, which learns about this event, especially if the event happens regularly. For instance, if the learning module finds that two of the three designers usually work from 10 am to 3 pm, it would notify the third designer and suggest that he/she work at the same time for better collaboration. If the learning module has found problems that may exist within the system, for instance, the lack of a certain design function, it will generate a problem statement as well as a conclusion, and then send them to the system developers. If the developers agree with the solution, self-development can start, and the improvement will affect the final actions to be passed to designers.

Fig. 7. Applied agent model in SecondLife System.

R. Wang et al. / Expert Systems with Applications 38 (2011) 1079–1088 1085
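The working-hours example above amounts to intersecting the designers' observed activity windows. The sketch below is an assumed illustration of that step only; the data, names and suggestion wording are invented.

```python
from collections import defaultdict

# Hypothetical activity log: designer -> set of hours (0-23) seen active.
hour_counts = defaultdict(set)

def log_activity(designer, hour):
    hour_counts[designer].add(hour)

for h in range(10, 15):          # Andrea and Roy usually work 10 am - 3 pm
    log_activity("Andrea", h)
    log_activity("Roy", h)
log_activity("Irene", 20)        # Irene works evenings

# The learning module intersects the two regular designers' windows and
# phrases a suggestion for the third designer.
common = hour_counts["Andrea"] & hour_counts["Roy"]
suggestion = (f"Irene: consider working {min(common)}:00-{max(common) + 1}:00 "
              f"to overlap with the others.")
print(suggestion)
```

A production version would need many days of observations before treating a window as "usual", along the lines of the repetition check discussed earlier.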

All design activities, including creating objects, editing and deleting existing buildings, and any actions of the agent modules, take place on the server side. Because the SecondLife platform has interfaces with other software and development tools such as HTML, PHP, and MySQL, an external database can be connected to the system to facilitate the motivation, learning and self-development functions of the agent module. The next section illustrates a working scenario of this learning agent model with the self-development module.

5.3. Working scenario

In the scenario described in the system overview, Andrea, Roy and Irene are now collaborating on the design of a clothing shop using the VR-Collab system. Andrea controls her avatar and makes it talk to the avatar controlled by Roy. The agent module in the virtual design environment senses Andrea's and Roy's avatars' performances, picks up interesting events and passes them to the compare-and-analysis module. For instance, the sensors could monitor Andrea and Roy's communication and find that they have been talking for a while. This issue is passed to the analyzer, and the analysis module then analyzes their communication contents and finds that they were discussing how to use the system toolbar. This result is passed to the learning module, which realizes that the issue has occurred several times based on its records. It sends this information to the next module, which concludes that the problem is probably caused by the lack of user instructions. This conclusion is passed to the self-development module, which arrives at the solution that a help menu should be added to the toolbar. The solution is then sent to the system developers in the physical design environment and confirmed by them. After that, the system can improve itself by generating a help menu and notifying users about the update the next time the agent finds that users are confused about the toolbar.
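The module chain in this scenario (sensation, analysis, learning, self-development) can be sketched end to end. The following is a toy illustration under stated assumptions: the keyword match, the recurrence threshold, and the message strings are all invented; the real analysis of communication content would be far richer.

```python
# Toy pipeline: sense -> analyze -> learn -> self-develop.
RECURRENCE_THRESHOLD = 3  # invented: how often a topic must recur
topic_counts = {}

def sense_and_analyze(chat_lines):
    """Return a topic label if the chat suggests confusion about the toolbar."""
    text = " ".join(chat_lines).lower()
    return "toolbar_confusion" if "toolbar" in text else None

def learn(topic):
    """Count recurrences; emit a problem statement once the topic repeats."""
    topic_counts[topic] = topic_counts.get(topic, 0) + 1
    if topic_counts[topic] >= RECURRENCE_THRESHOLD:
        return "Users repeatedly discuss how to use the toolbar: likely missing user instructions."
    return None

def self_develop(problem, developer_confirms):
    """Propose a fix, applied only after developer confirmation."""
    return "Add a help menu to the toolbar." if problem and developer_confirms else None

for _ in range(3):
    topic = sense_and_analyze(["How do I use the system toolbar?"])
    problem = learn(topic)
solution = self_develop(problem, developer_confirms=True)
print(solution)
```

Note the developer-confirmation gate: as in the scenario, the self-development module proposes but the developers decide.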

The VR-Collab system has been initially implemented in the SecondLife environment. By attaching scripts to virtual buildings, furniture and objects, the system offers the following two features:

– It senses communication channels. If the agent finds that designer A prefers text-based instant messaging (IM) but designers B and C use verbal chatting more often, it will suggest that the developer add a "text-to-sound" (TTS) service, which delivers the text that designer A has entered to designers B and C in the form of voice; if it finds that the designers are using different languages when sending IM, it will suggest that the developer add a translation service.

– It considers media richness modalities on each client side and tries to improve the system from the information it senses. It can obtain information about hardware and network conditions and suggest a suitable media richness modality. For instance, if the bandwidth is narrow, it would suggest a light version with a text-based communication channel instead of modalities that consume much bandwidth.
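The two features above reduce to a few sensing-driven rules. The sketch below is an assumed illustration, not the system's code; the designer preferences, language codes, and the 128 kbps threshold are invented.

```python
# Illustrative rule set for the agent's feature suggestions.
def suggest_features(preferences, languages, bandwidth_kbps):
    """preferences: designer -> 'im' or 'voice'; languages: designer -> code."""
    suggestions = []
    if "im" in preferences.values() and "voice" in preferences.values():
        suggestions.append("add text-to-sound (TTS) bridge")
    if len(set(languages.values())) > 1:
        suggestions.append("add translation service")
    if bandwidth_kbps < 128:  # invented threshold for a "narrow" connection
        suggestions.append("switch to light, text-based modality")
    return suggestions

prefs = {"A": "im", "B": "voice", "C": "voice"}
langs = {"A": "en", "B": "en", "C": "zh"}
print(suggest_features(prefs, langs, bandwidth_kbps=96))
```

Each suggestion would be routed to the developer for confirmation, consistent with the self-development workflow described earlier.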

Fig. 8 shows the system framework of VR-Collab in SecondLife, where three or more designers are collaborating on a clothing shop design. Information about media richness is sensed by the agent and sent to the database via the shared virtual environment system. If an event repeats, the agent can retrieve this information from the database and bring it to the developer's attention. The developer can then make decisions on system media richness modalities.

Fig. 9 shows a screenshot of the VR-Collab system. The agent finds that a designer does not have enough bandwidth for audio chatting; therefore, it suggests using IM instead.

6. Experimental methodology

The aim of creating this collaborative motivated learning agent model is to better support collaborative work between designers who are physically located in different places. Therefore, systems that adopt this model can be regarded as groupware for cooperative work. Evaluation methods currently used to test groupware systems could be applied to the prototype presented in this paper, such as collaboration usability analysis (CUA) (Pinelle, Gutwin, & Greenberg, 2004; Herskovic, Pino, Ochoa, & Antunes, 2007), groupware observational user testing (GOT) (Herskovic et al., 2007; Gutwin & Greenberg, 2000), protocol analysis (PA) (Lees, Manton, & Triggs, 1999), and cooperation scenarios (COS) (Herskovic et al., 2007; Stiemerling & Cremers, 1998). This section illustrates how these methods could be applied to evaluate the conceptual model. Each method has its own strength in analysing the users' design and interaction in the systems.

Fig. 8. VR-Collab system in SecondLife.

Fig. 9. Screenshot of VR-Collab system.


6.1. Collaboration usability analysis (CUA)

CUA is a task analysis technique designed to represent collaboration in shared tasks for the purpose of carrying out usability evaluations of groupware systems; it focuses on the teamwork aspects of a collaborative situation (Pinelle et al., 2004). Using this method, the collaborative aspects of the model could be tested. The experiment could be set up as follows:

There could be two sets of collaborative design systems for remote designers: one adopts the collaborative learning agent model, and the other is without the agent model. All other aspects of the two systems are similar. Focus groups could be invited and assigned some simple design tasks. The designers will be separately located in two different rooms to enable remote collaboration. Only remote communication channels would be enabled, such as instant text messages, verbal chatting through the internet, or video conferencing. Participants should complete the same design task in both systems; after that, they will be required to fill out questionnaires based on their design experience. Results collected from the experiments in both systems will be analyzed to compare usability in several aspects, such as:

– How well did the participants complete the design tasks;
– How much time was spent on each task;
– How well did the participants communicate with each other; was the communication efficient?

Since CUA is a kind of inquiry method that gathers data from participants, thus reflecting their subjective opinions, other evaluation methods could be adopted to assist CUA, such as groupware observational user testing (GOT), protocol analysis (PA), and cooperation scenarios (COS).

6.2. Groupware observational user testing (GOT)

GOT is a technique based on the observational user testing method. It involves evaluators observing how users perform particular tasks supported by a system in a laboratory setting (Gutwin & Greenberg, 2000). This method could be adopted when conducting the experiment described in the previous section. While the participants are working on the collaborative design task, the researchers could observe them to see what problems they encounter, or ask them to think aloud about what they are doing to gain insights into the current tasks. By identifying the problems and participants' thoughts through observation during the experiment, the conceptual model could be better evaluated with the qualitative data collected from GOT. GOT demands usability experts with a deep understanding of the systems and of users' behaviours. While conducting usability tests, they may record what is being observed and then analyse it against a set of guidelines. Considerable time is needed to interpret the data captured in work settings. The evaluation of qualitative data can be supplemented with other methods that make use of quantitative data.


6.3. Protocol analysis (PA)

Protocol analysis is a rigorous methodology for eliciting verbal reports of thought sequences as a valid source of data on thinking (Lees et al., 1999). While the participants work on the design task described in the CUA section, their communication, including verbal chatting through the internet and instant text messages, will be recorded. After the design tasks are completed, all sentences will be put into categories based on their contents. The categories identified for this experiment include:

– Communication on how to use the system;
– Communication on how to work on the design task collaboratively;
– Communication on creating new design ideas;
– Communication on repeating ideas;
– Personal communication;
– Communication that has no meaning;
– Etc.

By coding the communication contents into these categories and further analyzing them, the efficiency of the communication aspect of the conceptual model can be evaluated.

An experiment using PA could be set up as follows: a number of participants will be gathered and divided into several groups of three. The participants will be trained in how to use the VR-Collab system. Each group will be assigned one collaborative design task at a time. The task is to design a virtual clothing shop together, including the virtual building, furniture, decoration and clothes. Half of the groups are assigned normal SecondLife developing conditions, and the other half will experience the VR-Collab condition, which includes the self-development agent module. The design processes will be recorded. After each group finishes their design, their communication will be coded and the percentage of each category calculated. One-way ANOVA will be used to compare the efficiency of each working environment. Since there is no direct measure of designers' thinking processes, many researchers have adopted the protocol analysis method to explore and understand how designers design and communicate while working collaboratively on design tasks (Foreman & Gillett, 1997; Gero & Tang, 2001).
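The coding and comparison steps above can be sketched numerically. This is an assumed illustration with invented category labels and data; the F statistic is computed from first principles here, though in practice a statistics package would be used.

```python
from collections import Counter

def category_percentages(coded_utterances):
    """Percentage of utterances falling in each PA category."""
    counts = Counter(coded_utterances)
    total = len(coded_utterances)
    return {cat: 100 * n / total for cat, n in counts.items()}

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over the given sample groups."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented example: one group's coded utterances...
coded = ["system_use", "design_idea", "design_idea", "personal"]
print(category_percentages(coded))

# ...and per-group percentages of "new design idea" talk, by condition
# (plain SecondLife vs. VR-Collab), compared with one-way ANOVA:
f = one_way_anova_f([30, 35, 28], [45, 50, 47])
```

A large F relative to the critical value for the relevant degrees of freedom would indicate a difference between the two working environments.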

6.4. Cooperation scenarios (COS)

The COS method aims to capture users' work and its context (Stiemerling & Cremers, 1998). To use this method to test the conceptual model, researchers will conduct field studies, semi-structured interviews, and workspace observations. The experiment setup described in the previous sections could adopt the COS method as a complement. Each participant in the experiment could be assigned a role, for instance, leading designer, assistant designer, project manager, or customer, and should work on the design tasks according to that role. Through the activities in COS, the participants' cooperative behaviour, users' involvement, their roles, and the relevant context can be identified. For each role involved in the experiment, researchers could analyse how the new conceptual model could change and benefit the design process.

7. Summary and future work

In this paper, a theoretical model of an intelligent learning agent with a self-development function for collaborative systems was described. The aim of this model is to make better use of collaborative systems. The Mixed-Reality mediated collaborative design system, MR-Collab, and the Virtual-Reality mediated collaborative system, VR-Collab, were briefly introduced as scenarios to test the theoretical model. However, the model is not limited to these particular systems; it could also be applied to other collaborative systems.

The main contribution of this paper is a new concept: agent-based models should not be limited to sensing and learning about the collaborative working process only, but should also address the problems, issues and potential improvements of the system itself. The paper provides a conceptual framework as well as potential systems that could adopt this framework, and it suggests potential experimental methodologies. However, due to time limitations, the systems described in this paper have not been fully implemented yet. The Self-Development Agent discussed here will be improved, and more specific aspects of this agent will be worked out in future work. More studies on the links between distributed spaces in the model are also needed. To further develop and test this model, the proposed MR-Collab and VR-Collab systems will be technically implemented to realize those agents, and corresponding experiments are to be designed and conducted, as discussed in the previous section.

References

Axtell, R. L., Andrews, C. J., & Small, M. J. (2003). Agent-based models of industrial ecosystems. Vol. 2009.

Churchill, E. F., & Bly, S. (1999). It's all in the words: Supporting work activities with lightweight tools. Proceedings of the international ACM SIGGROUP conference on supporting group work. Phoenix, Arizona, United States: ACM.

Cook, D. J., Youngblood, M., Edwin, I., Heierman, O., Gopalratnam, K., Rao, et al. (2003). MavHome: An agent-based smart home. In First IEEE international conference on pervasive computing and communications (PerCom'03) (pp. 521–524).

Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32, 554–571.

Foreman, N., & Gillett, R. (1997). Handbook of spatial research paradigms and methodologies. Hove, UK: Psychology Press.

Gero, J. S., & Tang, H. H. (2001). Differences between retrospective and concurrent protocols in revealing the process-oriented aspects of the design process. Design Studies, 21(3), 283–295.

Gutwin, C., & Greenberg, S. (2000). The mechanics of collaboration: Developing low cost usability evaluation methods for shared workspaces. Proceedings of the ninth IEEE international workshops on enabling technologies: Infrastructure for collaborative enterprises. IEEE Computer Society.

Herskovic, V., Pino, J. A., Ochoa, S. F., & Antunes, P. (2007). Evaluation methods for groupware systems. Groupware: Design, implementation, and use (Vol. 4715/2007, pp. 328–336). Berlin/Heidelberg: Springer.

Johanson, B., Fox, A., & Winograd, T. (2002). The interactive workspaces project: Experiences with ubiquitous computing rooms. Pervasive Computing (Vol. 1, pp. 67–74). IEEE.

Kato, I. P. H., & Billinghurst, M. (2000). ARToolKit user manual (2.33 ed.).

Kato, H., & Billinghurst, M. (1999). Marker tracking and HMD calibration for a video-based augmented reality conferencing system. Proceedings of the second IEEE and ACM international workshop on augmented reality. IEEE Computer Society.

Lees, C., Manton, J., & Triggs, T. (1999). Protocol analysis as a tool in function and task analysis. Salisbury, South Australia: DSTO Electronics and Surveillance Research Laboratory.

Linden Lab (2003). Second Life.

Maher, M. L., Merrick, K., & Saunders, R. (2007). From passive to proactive design elements: Incorporating curious agents into intelligent rooms. In Computer-aided architectural design futures (CAADFutures) (pp. 447–460).

Maher, M. L., Liew, P.-S., Gu, N., & Ding, L. (2005). An agent approach to supporting collaborative design in 3D virtual worlds. Automation in Construction, 14, 7.

Moore, R. J., Ducheneaut, N., & Nickell, E. (2006). Doing virtually nothing: Awareness and accountability in massively multiplayer online worlds. Computer Supported Cooperative Work (CSCW), Vol. 16 (pp. 265–305).

Pinelle, D., Gutwin, C., & Greenberg, S. (2004). Collaboration usability analysis: Task analysis for groupware usability evaluations. Interactions, Vol. 11 (pp. 7–8).

Rabiner, L., & Juang, B. (1986). An introduction to hidden Markov models. ASSP Magazine (Vol. 3, p. 13). IEEE.

Russell, S. J., & Norvig, P. (2002). Artificial intelligence: A modern approach. Prentice Hall.

Shneiderman, B., & Plaisant, C. (2004). Designing the user interface: Strategies for effective human–computer interaction (4th ed.). Pearson Addison Wesley.

Short, J. A., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: Wiley.

Stiemerling, O., & Cremers, A. B. (1998). The use of cooperation scenarios in the design and evaluation of a CSCW system. IEEE Transactions on Software Engineering, 24, 1171–1181.


Tay, F. E. H., & Roy, A. (2003). CyberCAD: A collaborative approach in 3D-CAD technology in a multimedia-supported environment. Computers in Industry, 52, 127–145.

Wadley, G., Gibbs, M. R., & Ducheneaut, N. (2009). You can be too rich: Mediated communication in a virtual world. In OZCHI 2009, Melbourne, Australia.

Wang, T.-W., & Tadisina, S. K. (2007). Simulating Internet-based collaboration: A cost-benefit case study using a multi-agent model. Decision Support Systems, 43, 645–662.

Zhao, S. (2003). Toward a taxonomy of copresence. Presence: Teleoperators and Virtual Environments, Vol. 12 (p. 11).