
The importance of participant interaction in online environments


Decision Support Systems 43 (2007) 853–865
www.elsevier.com/locate/dss

The importance of participant interaction in online environments

J.B. Arbaugh a,1, Raquel Benbunan-Fich b,⁎

a College of Business Administration, University of Wisconsin Oshkosh, 800 Algoma Blvd., Oshkosh, WI 54901, USA
b Zicklin School of Business, Box B11-220, Baruch College, CUNY, New York, NY 10010, USA

Received 14 August 2006; received in revised form 28 December 2006; accepted 31 December 2006
Available online 12 January 2007

Abstract

An emerging body of research suggests that participant interaction is one of the strongest predictors of success in online environments. However, studies about the effects of participant interaction in a large sample of multiple online environments are rather limited. Using hierarchical modeling techniques, we examine a sample of 40 online MBA courses to determine whether learner–instructor, learner–learner, or learner–system interaction is most significantly related to online course outcomes. Our findings suggest that while collaborative environments were associated with higher levels of learner–learner and learner–system interaction, only learner–instructor and learner–system interaction were significantly associated with increased perceived learning.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Participant interaction; Online learning; Hierarchical ANOVA; Perceived learning

1. Introduction

After reviews of numerous studies have concluded that at worst there is no significant difference in outcomes between online and classroom-based courses [12,50], a finding consistent with reviews of the empirical Group Support Systems literature [18], probably one of the most definitive research findings to date regarding the effectiveness of online learning is the importance of participant interaction [5,45]. An increasing number of studies suggest that participant engagement, whether it is between participants and/or between participants and the instructor, is one of the strongest predictors of positive outcomes in online educational environments [14,15,33,44]. However, as the literature moves from mere comparisons of results in traditional vs. online environments to considering additional intervening variables [12,50], the area of participant interaction deserves closer attention. In addition to measuring the relationship between types of interaction and outcomes, there is a need to understand the nature of these interactions and potential influences on them [21,43].

⁎ Corresponding author. Tel.: +1 646 312 3375; fax: +1 646 312 3351. E-mail addresses: [email protected] (J.B. Arbaugh), [email protected] (R. Benbunan-Fich).
1 Tel.: +1 920 424 7189; fax: +1 920 424 7413.
0167-9236/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.dss.2006.12.013

Relatively little research attention has been devoted to examining the nature of interaction across a large sample of participants drawn from different online environments, in part because of the relative newness of the research stream and the exploratory nature of initial online settings. As this area matures from the research and practice standpoint, there is an opportunity to systematically examine different types of interaction in multiple online environments, taking into account the design of these virtual spaces and their outcomes [45,49]. Research that determines strong levels of fit between modes of participant interaction and other characteristics would be most useful for encouraging successful online instruction and training.

This paper seeks to clarify the relationship between types of participant interaction, considering both the design of the online environments and their outcomes. By studying these relationships, we hope to be able to provide guidance regarding the types of interaction that are more appropriate and conducive to successful outcomes. The paper begins with a review of the literature on types of online environments and types of participant interaction. We then use this background to formulate the research hypotheses guiding this study. The next section describes the data collection techniques and operationalization of variables for a sample collected over seven semesters at a university in the Midwestern U.S. The presentation of results is followed by a discussion of the findings, limitations, and implications of this research. Finally, we conclude by discussing the study's contributions and future research directions.

2. Literature review

2.1. Typology of online environments

Computer-based learning environments provide opportunities for online learners to learn at their chosen time and location while allowing them to interact with other online learners and access a wide range of online resources [49]. Depending on how learners receive the materials and interact with others, online virtual spaces designed for education and training can be classified in terms of two dimensions: knowledge construction and group collaboration, with each one further subdivided into two categories. Knowledge construction consists of objectivist vs. constructivist approaches, and collaboration is separated into individual vs. group work [8].

Fig. 1. Typology of online environments.

Regarding knowledge construction, the objectivist model consists of transferring knowledge from the professor to the participants and allowing each of them to learn independently. The main assumption in this approach is that there is a unique and objective body of knowledge representing the world that can be articulated and directly communicated to the students by the professor/instructor. In contrast, the constructivist model assumes that knowledge is created or constructed by every learner. Instead of an external objective reality, the mind produces its own conception [28]. In both modes, instruction can be imparted by a person (professor) or automatically delivered by the system through computer-based tutoring, as described by Siemer and Angelides [42]. Our focus here is on environments designed and delivered by human instructors as opposed to automatic tutoring systems.

Group collaboration represents the extent to which participants learn in isolation or through interactions with their peers. Collaborative activities allow learners greater opportunities for increased social presence and a greater sense of online community, both of which have been associated with positive online course outcomes [22,38]. When working with peers instead of alone (or just with the instructor), anxiety and uncertainty are reduced as learners communicate with their teammates and find their way together through complex or new tasks [25].

The combination of knowledge construction with the presence of group collaboration produces a fourfold typology: transfer–individual, transfer–group, construction–individual and construction–group. These categories describe four possible web-based learning environments. Fig. 1 depicts this classification.
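To make the fourfold typology concrete, the two binary dimensions can be sketched as a small classification. The class and field names below are illustrative only (not from the paper); the example course assignments follow Table 1 of this study.

```python
from dataclasses import dataclass

# Hypothetical sketch of the typology in Fig. 1: each course sits on two
# binary dimensions, knowledge construction and group collaboration.
KNOWLEDGE = ("transfer", "construction")       # objectivist vs. constructivist
COLLABORATION = ("individual", "group")        # individual vs. group work

@dataclass
class Course:
    name: str
    constructivist: bool   # False = objectivist (transfer) approach
    group_based: bool      # False = individual work, True = group work

    @property
    def quadrant(self) -> str:
        # bool indexes the tuple: False -> 0, True -> 1
        return f"{KNOWLEDGE[self.constructivist]}-{COLLABORATION[self.group_based]}"

# Example assignments consistent with Table 1
courses = [
    Course("Foundations of Finance", constructivist=False, group_based=False),
    Course("International Business", constructivist=False, group_based=True),
    Course("Marketing Strategy", constructivist=True, group_based=False),
    Course("Planning for Mgt. in the Future", constructivist=True, group_based=True),
]
for c in courses:
    print(c.name, "->", c.quadrant)
```

Each course section thus maps to exactly one of the four cells; the paper's O/I, O/G, C/I, C/G labels correspond to the four quadrant strings above.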

In a Transfer–Individual environment there is a direct knowledge transfer from the source (i.e. the instructor) to the participants. Participants in these settings are required to learn and master the material individually and demonstrate their assimilation of preordained knowledge by working individually in the completion of assignments or exams [29].

In a Construction–Individual environment, participants create/construct their own knowledge independently, by actively interacting with the subject matter. Instead of receiving modules of pre-ordained knowledge (as in transfer courses), participants gather materials and combine information from different sources. This process of investigation and discovery, usually presented in the form of a complex project or case study, allows the student to understand the subject matter and create knowledge for him/herself [30].

A Construction–Group environment is also based on the premise that students search for meanings instead of receiving packaged lectures. However, instead of carrying out individual investigations, participants in this type of course work together in small groups to discuss cases and/or to collaborate on projects. This kind of environment requires students to interact with others in the knowledge construction process [29,30].

Finally, a Transfer–Group environment is also based on a one-way transmission of knowledge from the instructor to the participants and requires these participants to work together and to co-operate in the completion of group projects and other group-based assignments. The mastery of the material is accomplished not only by individually assimilating content, but also by working on group assignments that reinforce the learning process. The main difference between transfer–group and construction–group is that participants are not required to create new meaning or build knowledge in order to solve the problem, but rather reproduce existing concepts, principles or methods. In doing so, students primarily learn with others and experience some of the cognitive benefits of group work.

2.2. Participant interaction

An asynchronous environment frees users from temporal and geographical constraints, allowing people to reflect more about their own contributions [10,17]. The downside of being removed from others in time and space is an increase in feelings of isolation. To counteract this disadvantage, online learning environments should afford opportunities for meaningful instructional interactions among participants and between participants and objects in their environment. According to Wagner [47], instructional interactions are reciprocal events taking place between a learner and the learner's environment with the purpose of changing learners and moving them toward the achievement of their goals. Depending upon the interacting parties, there are three types of interaction: learner–instructor, learner–learner and learner–content [34]. While these types of interactions are present in every course, whether traditional or online, they are particularly critical in asynchronous virtual learning environments.

Learner–instructor interaction measures the level of involvement of the instructor with the students and the extent to which they experience the proximity of the instructor through his/her online presence. Depending on the instructional approach, the instructor can take a prominent role (instructor centered) or a facilitator role. For example, an objectivist mode of instruction, which is based on the transfer of information from the lecturer to the learner, emphasizes learner–instructor interaction over the other types [31].

Learner–learner interaction refers to the exchanges among students enrolled in the course. Collaborative models of learning are based on the notion that learning is most successful when small groups of students share and discuss information. Interaction with peers provides participants with the synergy and motivation to excel [9,45]. Through task-oriented and socio-emotional interactions, online participants obtain the resources and support necessary to succeed in this environment [18].

Learner–content interaction is the interaction between the learner and the material to be learned, which can be presented in different formats such as text, audio, video, graphs and images. The connection between the learner and the material is influenced by the nature of the subject matter (factual vs. procedural or quantitative vs. qualitative content) and the design of the online environment. For example, in constructivist courses, students should actively construct their own knowledge through intensive engagement with multiple sources of information, whereas in objectivist courses based on lectures and textbooks, the students are mostly expected to recall the material as presented. Consequently, technological support for objectivist approaches to instruction is focused on learner–content and learner–instructor interaction, while technological platforms for constructivism must provide access to content in a non-linear or non-structured way, and learning tools such as databases, conceptual models, simulations and hypermedia [7,31].

In pure online environments, participants must use specific technologies, platforms, applications and templates to interact with instructors, other participants and content [49]. Accordingly, Hillman et al. [23] propose another type of interaction to account for the interface between the learner and the system. The nature of this learner–interface or learner–system interaction is critical to assess the dynamics of the online learning experience. Other studies have shown the importance of interface design in the successful performance of computer-supported groups [41]. Although courseware systems are increasingly becoming easier to use, learner–system interaction facilitates or constrains the quantity and quality of the other three types of interactions [44].

An understanding of the different types of interactions has important implications for online environments. Numerous conceptual works and empirical studies have emphasized the importance of interaction to achieve positive outcomes in online courses [44,45]. However, prior research has mostly been focused on a particular kind of interaction and its effects. Some researchers have examined interaction with the instructor as a predictor of success [32,33,43], while others have investigated the role of student–student interaction in learning and/or satisfaction outcomes [2,37], and still others look at interaction with the technology as a key factor in online courses [23,29]. Very few of these studies have examined the relations between different types of online environments and technology-mediated interactions and their influence on outcomes simultaneously [12].

2.3. Hypotheses development

In online objectivist environments, participants are exposed to frequent instructional interactions enacted by the instructor, who is in control of the material and pace of learning [28,31]. Accordingly, participants are mostly passive recipients of content delivered to them via PowerPoint presentations, posted lecture materials, or streaming video and audio lectures [7]. Because of the central role of the instructor as the “content expert” in this type of environment, participants are more likely to contact the instructor with questions or concerns than to contact other participants [27]. Therefore,

H1. Participants in online objectivist environments will report higher levels of learner–instructor interaction than learner–learner interaction.

Collaborative approaches leverage the value of learner–learner interaction, not only for learning purposes but also for developing a sense of learning community in online courses lacking face-to-face communication [38]. Online collaborative environments use synchronous and asynchronous computer-mediated communication systems such as chat rooms and discussion forums to support interactions among participants organized in small groups [7]. The success of technology-supported collaborative settings depends to a great extent on the effectiveness of these groups. Problems such as lack of commitment by individuals, difficulty in developing trust among team members, insufficient knowledge about the activities and coordination problems may reduce the potential learning advantages of working in groups [25]. Despite these potential disadvantages, adding a social dimension to the activities that take place in online environments is particularly important, because the sense of community and the typical cues of face-to-face interactions are absent in these settings [37]. The intensity of communication among learners takes precedence over other types of interaction in online collaborative environments. Thus,

H2. Participants in collaborative environments will report higher levels of learner–learner interaction than learner–instructor interaction.

Collaborative environments are designed around constant and frequent interactions among participants [1,20]. These exchanges can occur in public discussion forums, open to all the students enrolled in a class, or in smaller group sessions, where only members of the same team can participate [25]. Since these interactions are mediated by the technology [23], participants in these collaborative environments are expected to report higher levels of learner–system interaction than those in individually oriented settings. Therefore,

H3. Participants in collaborative online environments will report higher levels of learner–system interaction than students in individually oriented courses.

Due to the cognitive benefits of working with peers in small groups whereby ‘learning from others’ mechanisms are activated [9], participants who interact more with their counterparts will report higher levels of learning than those who do not. Several prior studies also document the positive role of learner–learner interaction in student learning [40,45]. In fact, the engagement of multiple participants in inquiry and discourse has been shown to be a key element to improve the learning experience in online courses [20]. Therefore,

H4a. Learner–learner interaction will be positively related to self-reported learning.

In addition to the learning benefits associated with working with fellow classmates, interactions among participants have socio-emotional benefits, as learners find their way together through complex tasks and experience less isolation and remoteness, which are typical in online courses [9,18,37]. Because of these social benefits that create a sense of learning community in totally online environments lacking face-to-face meetings, participants who experience higher levels of learner–learner interaction are more satisfied with the online environment [2,38]. Based on this argument, we hypothesize:

H4b. Participants who experience higher levels of interaction with other participants will be more satisfied with the online medium.

In online educational environments, participants are exposed to frequent instructional interactions enacted by the instructor. Depending on the knowledge delivery approach, the instructor plays either a central role by being in control of the material and pace of learning or a facilitator role as an enabler of the participants' learning [20,28,39]. In both cases, the online presence of the instructor is crucial to ensure the success of online environments [6,15,32]. Instructors who are engaged with their virtual settings are able to provide a better learning experience for their students by answering their questions and concerns in a timely fashion [39]. Therefore, we hypothesize:

H5a. Learner–instructor interaction will be positively related to self-reported learning.

A strong online presence of the instructor results in socio-emotional benefits as well, as instructors are able to reduce the uncertainty traditionally associated with pure online environments [11,15]. In fact, it has been found that participants who report higher levels of interaction with the instructor are more satisfied with online learning [33,44,45]. Thus,

H5b. Participants who experience higher levels of interaction with the instructor will be more satisfied with the online medium.

Since in pure online environments interactions with peers and instructors occur through the system, those who are active participants will use the system more intensively than other more passive members [27,38,44]. By virtue of this engagement, these active participants are typically those who both learn more and are more satisfied with the online medium. Therefore, we hypothesize:

H6a. Learner–system interaction will be positively related to self-reported learning.

H6b. Participants who experience higher levels of interaction with the system will be more satisfied with the medium.

3. Methods

3.1. Sample and data collection

To test these hypotheses, we used a sample of forty class sections delivered entirely online from the MBA program of an upper-Midwest U.S. university, between Summer 2000 and Summer 2002. These sections included a wide range of courses (Strategy, Organizational Behavior, Project Management, International Business, Human Resources, Finance, Accounting, Management, Information Systems, and E-Commerce) taught by fifteen different instructors. Enrollments in these sections ranged from 9 to 35 students.

These 40 course sections were classified based on the approaches used to deliver the material (objectivist vs. constructivist) and to structure the course activities (individual vs. collaborative), according to the fourfold typology presented above. To this end, we conducted semi-structured interviews with the instructors. Faculty members were asked whether their courses were based primarily on fact/concept dissemination via online lectures, or on knowledge construction by students. In addition, faculty members were also asked to compare the proportion of individual and group learning activities in their courses. Overall, the faculty's responses were consistent with secondary sources of information such as course syllabi and web sites. For example, none of the objectivist–individual courses used team projects or group discussions, while the constructivist–group sections used either group projects or peer evaluation techniques.

As a result of the information acquired from our interviews and analysis, 3 courses (4 sections) were categorized as objectivist–individual, 6 courses (14 sections) were categorized as objectivist–group, 9 courses (9 sections) were categorized as constructivist–individual and 8 courses (13 sections) were categorized as constructivist–group. When the teaching and learning models were different, two sections of the same course were classified in different cells of the framework. Table 1 details the courses in the study by instructor, content, and teaching approach classification. It is noteworthy that some instructors teach in more than one mode. The presence of the same instructor in different cells alleviates concerns of potential confounds between the knowledge delivery approach and the instructor as a source of differences in the dependent variables.

Data collection from students was completed in a two-step process. Students completed a survey either in class for courses that had a final physical meeting or via e-mail for those that did not. In the second step, non-responding students were mailed a copy of the survey. The usable response rate was 66% (579 of 872). A comparison of student age and gender distribution showed no significant differences between respondents and non-respondents, suggesting the sample is not subject to non-response bias. Table 1 also shows the response rate by course section.

Table 1
Courses, delivery modes, instructors, respondents, and enrollments

Course | Type a | Instructor | No. of class sections | Course(s) response rate
Foundations of Finance | O/I | A | 2 | 51.3% (20 of 39)
Internat'l Financial Mgt. and Inv. | O/I | B | 1 | 71.4% (10 of 14)
Managerial Accounting/Cost Mgt. | O/G | C | 2 | 73.1% (38 of 52)
Mgt. Information Systems Integration | C/G | D | 1 | 71% (22 of 31)
Mgt. Info. Syst. Analysis and Design | O/I | D | 1 | 37.5% (6 of 16)
Managerial Problem Solving | C/I | E | 1 | 95.2% (20 of 21)
Human Resources Management | 1 O/G, 1 C/G | F | 2 | 71.7% (43 of 60)
Human Resources Management | C/I | G | 1 | 59.1% (13 of 22)
International Business | O/G | H | 4 | 63.3% (62 of 98)
Organizational Foundations | O/G | I | 3 | 52.6% (30 of 57)
Organizational Leadership and Change | C/I | N | 1 | 94.4% (17 of 18)
Organizational Leadership and Change | C/G | J | 1 | 60.7% (17 of 28)
Personal and Professional Development | 1 C/G, 1 C/I | I | 2 | 80.8% (42 of 52)
Strategic Management | 1 O/G, 2 C/G | K | 3 | 69.6% (48 of 69)
Marketing Strategy | C/I | L | 1 | 53.8% (14 of 26)
E-Commerce to E-Business | C/I | L | 1 | 34.8% (8 of 23)
Classic and Cont. Literature in Bus. | C/G | I | 1 | 50% (6 of 12)
Classic and Cont. Literature in Bus. | C/G | M | 2 | 72.5% (37 of 51)
Planning for Mgt. in the Future | C/G | J | 2 | 69.4% (34 of 49)
Environmental Management | C/I | M | 1 | 68.8% (11 of 16)
Introduction to Project Mgt. | O/G | I | 3 | 58.2% (46 of 79)
Advanced Topics in Project Mgt. | C/G | I | 2 | 52% (13 of 25)
Project Execution and Control | C/I | I | 1 | 66.7% (8 of 12)
Business Environments: Law, Regulation, and Ethics | C/I | O | 1 | 63.6% (14 of 22)

a Type: O/I: Objectivist/Individual; C/I: Constructivist/Individual; O/G: Objectivist/Group; C/G: Constructivist/Group.
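As a quick arithmetic check, the overall and per-course response rates reported above can be reproduced directly from the counts in the text and Table 1:

```python
# Overall usable response rate: 579 usable responses of 872 surveyed students.
respondents, surveyed = 579, 872
print(f"overall: {respondents / surveyed:.1%}")   # rounds to the reported 66%

# One per-course rate from Table 1 (Foundations of Finance: 20 of 39)
print(f"per-course: {20 / 39:.1%}")               # 51.3%, matching the table
```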


3.2. Measures

The classification framework is defined by the knowledge delivery approach (objectivist vs. constructivist) and the collaborative dimension (individual vs. group). These two dimensions are measured with respective categorical variables: 0 for objectivism and 1 for constructivism, and 0 for individual and 1 for group work. The indicators for each course section were determined from the instructors' interview responses.

The dependent variables used in this study were perceived learning and satisfaction with the online medium. The perceived learning scale was originally developed by Hiltz [24] and Alavi [1]. The scale to measure satisfaction with the medium was modified from existing satisfaction scales and used by Arbaugh [2]. Several items were included to study the nature of the interaction in the courses: learner–instructor interaction, learner–learner interaction and learner–system interaction. Participant perceptions of learner–instructor interaction were measured using seven items adapted from instruments used by Sherry et al. [40]. Participant perceptions of learner–learner interaction were measured using four items from the instrument used by Sherry et al. [40]. Learner–system interaction was measured using four items based on the construct of Hillman et al. [23].

The control variables used were age and gender, skill level in web-based courses, attitudes toward course software, student prior experience with web-based courses, effort, and flexibility. Skill level was measured using a three-item scale of students' skills in using computers, computer keyboards, and the Internet. Attitude toward the technology was measured using a three-item scale adapted from the study of Thompson et al. [46]. Effort was calculated by the self-reported number of days a week the student was logged onto the course site multiplied by the average number of minutes a day they were logged on. We measured perceived flexibility of the course using Arbaugh's [2] six items.
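The effort measure described above is a simple product of two self-reported quantities. A minimal sketch (the function and variable names are ours, not the authors'):

```python
def weekly_effort_minutes(days_per_week: float, minutes_per_day: float) -> float:
    """Effort proxy: days per week logged onto the course site
    multiplied by average minutes logged on per day."""
    return days_per_week * minutes_per_day

# e.g. a student who logs on 4 days a week for about 30 minutes each day
print(weekly_effort_minutes(4, 30))  # 120 minutes of weekly site use
```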

4. Results

4.1. Factor and reliability analysis

To study the properties of the scales used in this study, we conducted a Harman's one-factor test using

Table 2
Results of factor analysis (loadings on Factors 1–8, in order)

Learner–Instructor
LII Item 1: .24 .80 .06 .04 .12 −.03 .07 .09
LII Item 2: .24 .78 .03 .19 .09 −.03 .06 −.01
LII Item 3: .29 .78 .06 .25 .13 −.01 .03 .01
LII Item 4: .23 .76 .03 .20 .15 −.04 .05 .11
LII Item 5: .16 .74 .15 .03 −.06 .13 .02 .15
LII Item 6: .21 .71 .10 .22 .00 .05 .06 .19
LII Item 7: .39 .64 .05 .33 .17 −.11 .04 .11

Learner–Learner
LLI Item 1: .11 .17 .01 .79 .11 −.02 .08 .01
LLI Item 2: .15 .15 .07 .78 .11 .02 .05 .05
LLI Item 3: .16 .28 .05 .71 .10 −.04 .13 .32
LLI Item 4: .18 .29 .07 .69 .04 .04 .09 .25

Learner–System
LSI Item 1: .22 .13 .19 .32 .02 .08 .17 .69
LSI Item 2: .13 .15 .16 .50 .02 .06 .19 .62
LSI Item 3: .27 .22 .29 .12 .33 .08 −.01 .52
LSI Item 4: .25 .35 .14 .14 .21 −.05 .17 .52

Course flexibility
CF Item 1: .12 .05 .82 .08 .08 .02 .17 .08
CF Item 2: .06 .09 .79 .04 .06 .04 .12 .13
CF Item 3: .07 −.01 .78 −.03 .07 .08 .07 .18
CF Item 4: .05 .08 .72 .04 .07 −.01 .03 −.08
CF Item 5: .19 .14 .67 .12 .40 .06 .17 .16
CF Item 6: .23 .16 .50 .04 .08 .02 .17 .08

Skill level
Skill level Item 1: .00 −.01 .07 −.02 .05 .92 .08 .03
Skill level Item 2: .06 −.01 .10 −.02 .05 .91 .06 −.01
Skill level Item 3: .07 .00 .01 .04 .02 .88 .06 .08

Attitude
Attitude Item 1: .14 .07 .20 .11 .07 .10 .86 .09
Attitude Item 2: .13 .07 .22 .11 .03 .10 .81 .16
Attitude Item 3: .01 .06 .12 .11 .21 .03 .80 .08

Perceived learning
P. learning Item 1: .86 .22 .08 .11 .09 .03 .06 .05
P. learning Item 2: .85 .21 .10 .11 .02 .00 .01 .05
P. learning Item 3: .82 .24 .11 .07 .06 .07 .03 .11
P. learning Item 4: .80 .20 .11 .14 .07 .13 −.00 .19
P. learning Item 5: .77 .25 .11 .14 .11 .04 .08 .13
P. learning Item 6: .68 .33 .07 .12 .31 −.03 .07 .23
P. learning Item 7: .60 .27 .21 .18 .47 −.00 .06 −.02
P. learning Item 8: .58 .31 .07 .15 .46 −.01 .09 .11
P. learning Item 9: .47 .38 .11 .43 .15 −.02 .15 .19

Satisfaction
Satisfaction Item 1: .29 .12 .34 .21 .67 .09 .16 .03
Satisfaction Item 2: .39 .16 .37 .20 .65 .06 .13 .09
Satisfaction Item 3: .07 .06 .16 .04 .56 .04 .14 .48
Satisfaction Item 4: .14 .11 .53 .10 .49 .10 .22 .19

J.B. Arbaugh, R. Benbunan-Fich / Decision Support Systems 43 (2007) 853–865

Table 3
Cronbach's α coefficient for each scale

                                 Cronbach's α coefficient

Learner–Instructor                        .92
  LII Item 1                              .91
  LII Item 2                              .91
  LII Item 3                              .90
  LII Item 4                              .91
  LII Item 5                              .92
  LII Item 6                              .91
  LII Item 7                              .91

Learner–Learner interaction               .85
  LLI Item 1                              .82
  LLI Item 2                              .82
  LLI Item 3                              .79
  LLI Item 4                              .81

Learner–System                            .80
  LSI Item 1                              .72
  LSI Item 2                              .72
  LSI Item 3                              .77
  LSI Item 4                              .79

Course flexibility                        .87
  CF Item 1                               .83
  CF Item 2                               .84
  CF Item 3                               .85
  CF Item 4                               .87
  CF Item 5                               .84
  CF Item 6                               .86

Skill level                               .90
  Skill level Item 1                      .83
  Skill level Item 2                      .85
  Skill level Item 3                      .90

Attitude                                  .85
  Attitude Item 1                         .72
  Attitude Item 2                         .76
  Attitude Item 3                         .86

Perceived learning                        .94
  P. learning Item 1                      .93
  P. learning Item 2                      .93
  P. learning Item 3                      .93
  P. learning Item 4                      .93
  P. learning Item 5                      .93
  P. learning Item 6                      .93
  P. learning Item 7                      .93
  P. learning Item 8                      .93
  P. learning Item 9                      .94

Satisfaction                              .83
  Satisfaction Item 1                     .86
  Satisfaction Item 2                     .74
  Satisfaction Item 3                     .74
  Satisfaction Item 4                     .77


the student survey items in an unrotated factor analysis. This analysis produced eight factors with no single factor accounting for the majority of variance, thus reducing concerns about common method variance [16]. A factor analysis with varimax rotation was used to compute the factor loadings. The results show that all the items measuring each of the eight perceptual variables clearly loaded onto a separate factor (see Table 2). These loadings range from .50 to .86, indicating a strong correlation between each of the items and the variable it measured, and providing evidence of good construct validity. In addition, with the exception of one item on the satisfaction scale, every item loads more strongly on its corresponding construct than on the other constructs, which indicates good discriminant validity.
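The screening step above (Harman's one-factor test) amounts to checking whether a single unrotated factor captures the majority of the item variance. A minimal numpy sketch on hypothetical survey data (the constructs, item counts and numbers here are illustrative, not the study's instrument) is:

```python
import numpy as np

def harman_first_factor_share(data: np.ndarray) -> float:
    """Variance share of the first unrotated factor, i.e. the largest
    eigenvalue of the item correlation matrix over the eigenvalue sum."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)   # ascending order
    return eigvals[-1] / eigvals.sum()

# Hypothetical survey: two independent 3-item constructs (simulated data)
rng = np.random.default_rng(1)
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
items = np.hstack([f1 + rng.normal(0, 0.8, (300, 3)),
                   f2 + rng.normal(0, 0.8, (300, 3))])
share = harman_first_factor_share(items)
print(f"first unrotated factor explains {share:.0%} of the variance")
```

Because the simulated items load on two independent constructs, the first factor falls well short of a majority of the variance, which is the pattern the test looks for when common method variance is not a concern.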

To determine convergent validity, we analyzed the single-item reliabilities and the overall construct reliability with Cronbach's α coefficients. As shown in Table 3, all the item reliabilities are above the cutoff of .7 recommended by Nunnally [35], indicating highly satisfactory levels of reliability.
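The scale reliabilities in Table 3 are standard Cronbach's α computations. A self-contained sketch on simulated 7-point Likert responses (hypothetical data, not the study's survey) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 7-point Likert responses to a 4-item scale (illustrative only)
rng = np.random.default_rng(0)
latent = rng.normal(5, 1, size=(200, 1))            # shared underlying construct
scores = np.clip(np.round(latent + rng.normal(0, 0.7, size=(200, 4))), 1, 7)
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Since the four simulated items share one latent construct, α lands well above the .7 cutoff discussed in the text.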

Table 4 shows the correlations between the different constructs. The only moderately high correlations are found between perceived learning and learner–instructor interaction (r = .58) and between course flexibility and satisfaction with the medium (r = .66). Overall, less than 5% of the correlations between variables are above .5 in either direction (3 of 66). The two dependent variables (learning perception and medium satisfaction) are not correlated.

4.2. Hypotheses tests

The variables of interest in this study are measured at the individual level (participant characteristics and perceptions). However, participants were nested in sections when they experienced the online environment (i.e., when they received the treatment). Because of this grouping, it is likely that their perceptions are correlated, and thus not entirely independent from each other. In statistical terms, group effects (or course-section effects) imply that dependent variables measured at the participant level are not independent and the error terms contain a systematic component [36].

Traditional regression models are predicated on the assumption of independence of error terms. Yet, in this case, this assumption is violated. Ignoring the group effect could lead us to falsely reject the null hypothesis (no effect of treatment). Thus, to control for the fact that participants are nested units in the course-sections and these sections are, in turn, grouped in different

Table 4
Descriptive statistics and correlations among study variables

Variable                                Mean    S.D.     1     2     3     4     5     6     7     8     9    10    11
1  Student learning                     5.21    1.17
2  Satisfaction with delivery medium    4.96    1.29    .00
3  Student age                         31.91    6.34   −.02  −.07
4  Student gender                        .41     .49   −.06   .03  −.11
5  Student skill level                  5.86    1.00    .06   .15  −.19  −.04
6  Prior internet courses               2.36    2.09    .05   .18  −.03  −.17   .15
7  Course site usage                   233.8   172.5    .17  −.02   .10  −.06   .02   .03
8  Attitude toward courseware           5.37    1.23    .14   .42  −.08   .09   .19   .22   .03
9  Learner–Instructor interaction       4.83    1.24    .58   .35  −.06  −.03   .03   .07   .20   .26
10 Learner–Learner interaction          5.04    1.38    .37   .36  −.06   .02   .04   .05   .21   .31   .55
11 Learner–Interface interaction        4.55    1.61    .31   .45  −.14   .11   .14   .23   .17   .39   .46   .60
12 Perceived flexibility                5.46    1.27    .17   .66  −.09   .01   .15   .22   .01   .40   .33   .25   .40

Note: Correlations above .09 are significant at the p < .05 level.

Table 5
HANOVA results on interaction variables

                                 n    Learner–Instructor   Learner–Learner   Learner–System
                                      interaction          interaction       interaction
                                      (Model 1)            (Model 2)         (Model 3)

Cell means
  Objectivist/Individual         36   3.45 (1.23)          3.31 (1.37)       3.21 (1.35)
  Objectivist/Collaborative     191   5.23 (1.04)          5.30 (1.15)       4.67 (1.21)
  Constructivist/Individual     128   4.86 (1.27)          4.56 (1.61)       3.95 (1.49)
  Constructivist/Collaborative  221   4.84 (1.31)          5.35 (1.34)       4.53 (1.24)

Marginal means
  Objectivist                   227   4.95 (1.26)          4.99 (1.39)       4.44 (1.34)
  Constructivist                349   4.85 (1.29)          5.06 (1.38)       4.31 (1.36)
  Individual                    164   4.55 (1.39)          4.29 (1.64)       3.79 (1.49)
  Collaborative                 412   5.02 (1.21)          5.33 (1.14)       4.59 (1.22)

HANOVA results (F-statistics)
  Model                               11.58⁎⁎⁎             9.50⁎⁎⁎           5.17⁎⁎⁎
  K. Construction                     3.52+                6.25⁎             2.83
  G. Collaboration                    8.36⁎⁎               17.59⁎⁎⁎          16.58⁎⁎⁎
  KC × GC                             7.55⁎⁎               4.85⁎             4.46⁎
  df                                  39, 535              39, 536           39, 537
  R²                                  .46                  .41               .27

Notes: Standard deviations in parentheses. Significance levels: +p < .10, ⁎p < .05, ⁎⁎p < .01, ⁎⁎⁎p < .001.


treatments, multilevel or hierarchical models, such as hierarchical ANOVAs or fixed-effects regressions, should be used [19].

To test the hypotheses about the influence of the different types of online environments on the various participant interaction metrics, we used a hierarchical analysis of variance (HANOVA). Consistent with our fourfold typology, the model has two main factors (Knowledge Construction: objectivist vs. constructivist, and Group Collaboration: individual vs. group). Walczuch and Watson [48] recommend using the group effect instead of the standard error as the error term for calculating the main effects. In the presence of significant group effects, the use of the standard error to calculate the F-statistic may overstate the significance of the main and interaction factors in the model. Therefore, a more conservative and correct estimation is produced when the group effect is used as the error term. This is the approach we use to report the results.
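The error-term choice above can be made concrete with a small numpy sketch: a between-section factor is tested against the section-level mean square (the group effect) rather than the student-level residual. The layout, effect sizes and all numbers below are hypothetical, not the study's data or exact model:

```python
import numpy as np

# Hypothetical layout: 20 course sections of 12 students; sections are
# assigned to one of two levels of a factor (e.g. individual vs. group).
rng = np.random.default_rng(2)
n_sections, n_per, n_levels = 20, 12, 2
level = np.repeat(np.arange(n_levels), n_sections // n_levels)
y = (4.0 + 0.8 * level[:, None]                    # factor (treatment) effect
     + rng.normal(0, 0.4, (n_sections, 1))         # section (group) effect
     + rng.normal(0, 1.0, (n_sections, n_per)))    # student-level residual

grand = y.mean()
sec_means = y.mean(axis=1)
level_means = np.array([sec_means[level == l].mean() for l in range(n_levels)])

# Factor mean square, tested against the section-level mean square
# (group effect as error term), not the pooled student-level residual.
per_level = n_sections // n_levels
ss_factor = n_per * per_level * ((level_means - grand) ** 2).sum()
ms_factor = ss_factor / (n_levels - 1)
ss_sections = n_per * ((sec_means - level_means[level]) ** 2).sum()
ms_sections = ss_sections / (n_sections - n_levels)
f_stat = ms_factor / ms_sections
print(f"F = {f_stat:.2f} on ({n_levels - 1}, {n_sections - n_levels}) df")
```

Because the denominator absorbs the section-to-section variation, this F-test is more conservative than one computed against the student-level error whenever the section effect is nonzero.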

Table 5 shows the means of the participant interaction variables in each type of environment. According to H1, participants in objectivist courses were expected to report higher levels of learner–instructor interaction. While the marginal means indicate that those in objectivist courses report higher levels of interaction with their instructor than participants in constructivist courses (4.95 vs. 4.85, respectively), the knowledge construction factor is only marginally significant in the first HANOVA model (F = 3.52; p < .10). Thus, H1 is not supported at the conventional 5% significance level.

H2 predicted that participants in collaborative environments would report higher levels of learner–learner interaction than students in individualistic settings. The marginal means of the collaborative vs. individual students support this prediction, and the group collaboration factor in the second HANOVA model is significant (F = 17.59; p < .001). H3 associated participants in collaborative environments with higher levels of learner–system interaction. The marginal mean of the collaborative conditions is significantly higher than the


marginal mean of students in individual conditions (4.59 vs. 3.79), and this hypothesis is supported (F = 16.58; p < .001).

In order to test the influence of the participant interaction measures on the dependent variables (H4a, H4b, H5a, H5b, H6a, H6b), we conducted fixed-effects regressions. This multilevel analysis technique allows us to control for the effects of having groups of participants enrolled in different sections and these sections assigned to different conditions. Here again, although both sets of variables (participant interactions and learning outcomes) are measured at the individual level, the observations cannot be considered independent because participants were enrolled in sections when they experienced the treatment and, therefore, their responses are correlated. In this case, the assumption of independence of error terms required by traditional OLS regression is violated due to the systematic components introduced by the course/section [36].
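One standard way to implement such a fixed-effects regression is the within transformation: demeaning the outcome and predictors by section, which is algebraically equivalent to including a dummy variable for each course section. A minimal numpy sketch on simulated data (all names, sizes and coefficients are hypothetical, not the study's estimates):

```python
import numpy as np

# Simulated panel: 40 sections of 15 students; y depends on a student-level
# predictor x plus an unobserved section-level effect.
rng = np.random.default_rng(3)
n_sections, n_per = 40, 15
section = np.repeat(np.arange(n_sections), n_per)
x = rng.normal(5, 1.2, n_sections * n_per)           # e.g. an interaction score
section_fx = rng.normal(0, 0.5, n_sections)[section]
y = 1.8 + 0.5 * x + section_fx + rng.normal(0, 0.8, x.size)

def within_demean(v: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Subtract each group's mean from its members."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

# Demeaning sweeps out the section effect; OLS on the demeaned data is
# the within (fixed-effects) estimator of the slope.
xd, yd = within_demean(x, section), within_demean(y, section)
beta = (xd @ yd) / (xd @ xd)
print(f"estimated slope = {beta:.2f} (true value 0.5)")
```

The within estimate recovers the student-level slope even though the section effects are never estimated explicitly, which is why this design handles nested observations that would violate ordinary OLS assumptions.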

Model 1 in Table 6 shows the regression results of the participant-level variables on learning perception. Contrary to our predictions, learner–learner interaction is not significantly associated with higher perceptions of learning. This result does not support H4a. However, as predicted, more interaction with the instructor is associated with better learning perception (p < .001), which supports H5a. Likewise, the coefficient for learner–system interaction is positive and significant (p < .05), indicating that higher levels of interaction with the system are significantly related to a better perception of learning. Thus, H6a is supported.

Table 6
Results of fixed effects regressions

                                   Model 1:               Model 2:
                                   Perceived learning     Medium satisfaction

Intercept                          1.85⁎⁎⁎ (.50)          −1.72⁎⁎ (.52)
Learner–Learner interaction        .06 (.06)              .09+ (.05)
Learner–Instructor interaction     .51⁎⁎⁎ (.05)           .06 (.05)
Learner–System interaction         .12⁎ (.05)             .27⁎⁎⁎ (.05)
Age                                .001 (.007)            .01 (.01)
Gender                             −.13 (.09)             −.00 (.10)
Computer/Internet skill level      .03 (.05)              .07 (.05)
Prior Internet courses             −.02 (.02)             .008 (.03)
Student effort                     .000 (.000)            −.001⁎⁎⁎ (.000)
Attitude toward course software    −.03 (.04)             .18⁎⁎⁎ (.05)
Perceived flexibility              −.07+ (.04)            .62⁎⁎⁎ (.05)
df                                 39, 520                39, 520

Notes: Regression coefficients with standard errors in parentheses. Significance levels: +p < .10, ⁎p < .05, ⁎⁎p < .01, ⁎⁎⁎p < .001.

Model 2 of Table 6 shows the results on medium satisfaction. Higher levels of interaction among learners result in marginally higher levels of satisfaction with the medium (p < .10). Although in the expected direction, this result does not meet the significance threshold to support H4b. In addition, the lack of significance of the learner–instructor interaction coefficient indicates that this type of interaction does not result in more satisfaction with the medium. Hence, H5b is not supported. However, higher levels of learner–system interaction are positively and significantly associated with medium satisfaction, a result that supports H6b.

5. Discussion

This study provides evidence of the importance of participant interaction for the success of online environments. With these results, we help answer recent calls for more research into contextual factors that influence online learning effectiveness [5,32]. Building upon prior research, we measured participant interaction in terms of learner–learner, learner–instructor and learner–system interaction. These measures were collected on a large sample of students taking different online MBA courses. To understand how and why these participant interaction metrics varied across courses, we interviewed the instructors in order to classify each section in the sample according to the teaching approach (objectivist vs. constructivist) and the nature of the learning activities (individual vs. group). Our analysis showed that students in collaborative courses experience higher levels of learner–learner and learner–system interaction than those in more individualistic sections.

Contrary to our expectations, learner–learner interaction did not have a significant effect on learning perception. There are several possible explanations for this non-significant finding. From a methodological standpoint, the scale we used measures the extent to which students communicated (or exchanged information) with each other and not necessarily the learning value of those exchanges. Consistent with this scale-related issue, it is also possible that students derived more socio-emotional support than learning advantages from the ability to communicate with fellow classmates. This explanation is partially confirmed by the marginally significant and positive coefficient of learner–learner interaction on medium satisfaction. Consequently, the primary by-products of learner–learner interaction may be the development of enhanced virtual team skills or improved online learning skills rather than a better understanding of the material to be learned [3,32,44]. Notwithstanding these explanations, the lack of support for the learner–learner


related hypotheses does suggest that the assumption that collaborative learning is the best mode for structuring online environments [28] warrants further examination [33,45].

Overall, when we analyzed the effects of the different types of interaction on the dependent variables, we found that only learner–instructor and learner–system interaction are significantly associated with higher perceptions of learning. This result is consistent with prior research and underscores the importance of instructors, regardless of the type of online environment [4,15,32]. In addition, the analysis of participant interaction variables on medium satisfaction suggests that the level of engagement of participants with the system results in higher levels of satisfaction with the online medium.

It is important to point out that while other researchers have used general measures of satisfaction with the learning experience, this study uses a more specific indicator to directly measure satisfaction with the online medium. Recent research suggests that measuring learning perception allows one to assess the mastery of the content, while measuring satisfaction allows for assessing the process of learning online [3]. Satisfaction with the delivery medium is particularly important when considering virtual learning environments because if students are not satisfied with the online course experience, they could opt out of online courses or transfer to another institution. Considering the human and capital investments many institutions have made in developing online learning capabilities, losing dissatisfied online learners would likely make it more difficult to recoup these costs [5,45].

Another methodological difference between our study and prior research is the test of hypotheses controlling for course-section effects. Our sample consists of multiple courses, and yet our variables of interest are measured at the individual level. In order to take into account the systematic errors introduced by the fact that participants were nested units in sections and their perceptions may be correlated (group-level effects), we used multilevel analysis techniques. In fact, the results for the participant interaction variables are obtained from hierarchical analysis of variance and from fixed-effects regressions, which control for the systematic errors introduced by the nesting of individuals in class sections [19,36,48].

5.1. Limitations

To put our results in context, we must acknowledge several limitations of this study. First, although the sample consists of forty sections, the sampling frame was a single MBA program. Therefore, the generalizability of the findings to other kinds of programs and university settings may be limited. Second, in our analyses, we did not explicitly take into account the particular characteristics of the subject matter taught in these courses, nor did we include a specific scale to measure learner–content interaction in the questionnaire. Third, since this is mainly a quantitative study, qualitative sources of data, such as content analysis of interactions (as in Benbunan-Fich et al. [10] and Gunawardena et al. [21]) or interviews with participants, would likely add more depth to our findings. We welcome the efforts of future researchers to refine and improve on these methods and measures.

5.2. Implications

After analyzing three different types of interaction in online environments, two of them stand out as critical for the achievement of better outcomes, namely learner–instructor and learner–system interaction. Students in most of the courses in our sample, regardless of the type of online environment used for those courses, reported high levels of interaction with the instructor, and this type of interaction was found to have a significant and positive effect on learning perception. The direct implication of this finding is that instructors must be engaged with their courses to enable participants to obtain better learning outcomes, regardless of the type of environment.

According to our findings, the only interaction variable conducive to better outcomes in both dependent variables (learning perception and medium satisfaction) is learner–system interaction. This result attests to the importance of the system that supports the delivery of online instruction and of the participants' attitudes toward the technology. Further research should explore how learners' general Internet self-efficacy and courseware-specific self-efficacy beliefs affect attitudes, intentions and actual usage, by extending the findings of Hsu and Chiu [26] to particular online learning environments. Future efforts should also examine how the perceptions of learner–system interaction depend upon specific design characteristics of courseware systems, such as usability, quality and reliability, and how these systems can be further improved in terms of their context following the framework developed in Borges et al. [13]. Our learner–system interaction results suggest that, in order to be successful, online environments require participants who are willing to interact intensively with the technology, and user-friendly and reliable systems to support this interaction.


6. Conclusion

Using a large sample of students from forty online MBA course-sections, this study investigated different types of participant interaction. In particular, we examined learner–learner, learner–instructor and learner–system interaction and their effects on learning perception and medium satisfaction. Despite expected differences in participant interaction variables depending on the type of online environment, this study found that participants who are more engaged with the system tend to be more satisfied with the medium and report better perceptions of learning.

References

[1] M. Alavi, Computer-mediated collaborative learning: an empirical evaluation, MIS Quarterly 18 (1994) 159–174.

[2] J.B. Arbaugh, Virtual classroom characteristics and student satisfaction with internet-based MBA courses, Journal of Management Education 24 (1) (2000) 32–54.

[3] J.B. Arbaugh, Learning to learn online: a study of perceptual changes between multiple online course experiences, Internet and Higher Education 7 (3) (2004) 169–182.

[4] J.B. Arbaugh, How much does "subject matter" matter? A study of disciplinary effects in on-line MBA courses, Academy of Management Learning and Education 4 (2005) 57–73.

[5] J.B. Arbaugh, R. Benbunan-Fich, Contextual factors that influence ALN effectiveness, in: S.R. Hiltz, R. Goldman (Eds.), Learning Together Online: Research on Asynchronous Learning Networks, Lawrence Erlbaum Publishers, Mahwah, NJ, 2005, pp. 123–144.

[6] J.B. Arbaugh, A. Hwang, Does "teaching presence" exist in online MBA courses? Internet and Higher Education 9 (1) (2006) 9–21.

[7] R. Benbunan-Fich, Improving education and training with Information Technology, Communications of the ACM 45 (6) (2002) 94–99.

[8] R. Benbunan-Fich, J.B. Arbaugh, Separating the effects of knowledge construction and group collaboration, Information and Management 33 (6) (2006) 778–793.

[9] R. Benbunan-Fich, S.R. Hiltz, Mediators of the effectiveness of online courses, IEEE Transactions on Professional Communication 46 (4) (2003) 298–312.

[10] R. Benbunan-Fich, S.R. Hiltz, M. Turoff, A comparative content analysis of face-to-face vs. asynchronous group decision making, Decision Support Systems 34 (4) (2003) 457–469.

[11] Z.L. Berge, Facilitating computer conferencing: recommendations from the field, Educational Technology 15 (1) (1995) 22–30 (available at: http://www.emoderators.com/moderators/teach_online.html).

[12] R.M. Bernard, P.C. Abrami, Y. Lou, E. Borokhovski, A. Wade, L. Wozney, P.A. Wallet, M. Fiset, B. Huang, How does distance education compare with classroom instruction? A meta-analysis of the empirical literature, Review of Educational Research 74 (2004) 379–439.

[13] M.R.S. Borges, P. Brézillon, J.A. Pino, J.-C. Pomerol, Dealing with the effects of context mismatch in group work, Decision Support Systems, in press (corrected proof available online 16 October 2006).

[14] S.F. Clouse, G.E. Evans, Graduate business students' performance with synchronous and asynchronous interaction e-learning methods, Decision Sciences Journal of Innovative Education 1 (2003) 181–202.

[15] N.W. Coppola, S.R. Hiltz, N.G. Rotter, Becoming a virtual professor: pedagogical roles and asynchronous learning networks, Journal of Management Information Systems 18 (4) (2002) 169–189.

[16] D.H. Doty, W.H. Glick, Common methods bias: does common methods variance really bias results? Organizational Research Methods 1 (1998) 374–406.

[17] K.L. Dowling, R.D. St. Louis, Asynchronous implementation of the nominal group technique: is it effective? Decision Support Systems 29 (3) (2000) 229–248.

[18] J. Fjermestad, An analysis of communication mode in group support systems research, Decision Support Systems 37 (2) (2004) 239–263.

[19] M. Gallivan, R. Benbunan-Fich, A framework for analyzing levels of analysis issues in studies of e-collaboration, IEEE Transactions on Professional Communication 48 (1) (2005) 87–104.

[20] D.R. Garrison, T. Anderson, W. Archer, Critical inquiry in a text-based environment: computer conferencing in higher education, Internet and Higher Education 2 (2000) 87–105.

[21] C.N. Gunawardena, C.A. Lowe, T. Anderson, Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing, Journal of Educational Computing Research 17 (4) (1997) 397–431.

[22] C.N. Gunawardena, F.J. Zittle, Social presence as a predictor of satisfaction within a computer-mediated conferencing environment, American Journal of Distance Education 11 (3) (1997) 8–26.

[23] D.C. Hillman, D.J. Willis, C.N. Gunawardena, Learner–interface interaction in distance education: an extension of contemporary models and strategies for practitioners, American Journal of Distance Education 8 (2) (1994) 30–42.

[24] S.R. Hiltz, The Virtual Classroom: Learning Without Limits Via Computer Networks, Ablex Publishing Corporation, Norwood, NJ, 1994.

[25] S.R. Hiltz, M. Turoff, What makes learning networks effective? Communications of the ACM 45 (4) (2002) 56–59.

[26] M.H. Hsu, C.M. Chiu, Internet self-efficacy and electronic service acceptance, Decision Support Systems 38 (3) (2004) 369–381.

[27] A. Hwang, J.B. Arbaugh, Virtual and traditional feedback-seeking behaviors: underlying competitive attitudes and consequent grade performance, Decision Sciences Journal of Innovative Education 4 (1) (2006) 1–28.

[28] D. Jonassen, M. Davidson, M. Collins, J. Campbell, B.B. Haag, Constructivism and computer-mediated communication in distance education, American Journal of Distance Education 9 (2) (1995) 7–26.

[29] M. Khalifa, R.C.-W. Kwok, Remote learning technologies: effects of hypertext and GSS, Decision Support Systems 26 (3) (1999) 195–207.

[30] D.E. Leidner, M. Fuller, Improving student learning of conceptual information: GSS supported collaborative learning vs. individual constructive learning, Decision Support Systems 20 (2) (1997) 149–163.

[31] D.E. Leidner, S.L. Jarvenpaa, The use of information technology to enhance management school education: a theoretical view, MIS Quarterly 19 (1995) 265–291.

[32] X. Liu, C.J. Bonk, R.J. Magjuka, S.H. Lee, B. Su, Exploring four dimensions of online instructor roles: a program level case study, Journal of Asynchronous Learning Networks 9 (4) (2005), retrieved December 30, 2005 from http://www.sloan-c.org/publications/jaln/v9n4/v9n4_liu_member.asp.

[33] R.B. Marks, S. Sibley, J.B. Arbaugh, A structural equation model of predictors for effective online learning, Journal of Management Education 29 (2005) 531–563.

[34] M.G. Moore, Editorial: three types of interaction, American Journal of Distance Education 3 (2) (1989) 1–4.

[35] J. Nunnally, Psychometric Theory, McGraw Hill, New York, NY, 1978.

[36] S. Raudenbush, A. Bryk, Hierarchical Linear Models: Applications and Data Analysis Methods, Sage Publications, Thousand Oaks, CA, 2002.

[37] J.C. Richardson, K. Swan, Examining social presence in online courses in relation to students' perceived learning and satisfaction, Journal of Asynchronous Learning Networks 7 (1) (2003), retrieved June 1, 2004 from http://www.aln.org/publications/jaln/v7n1/index.asp.

[38] A.P. Rovai, Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks, Internet and Higher Education 5 (2002) 319–332.

[39] P.J. Shea, E.E. Fredericksen, A.M. Pickett, W.E. Pelz, A preliminary investigation of "teaching presence" in the SUNY learning network, in: J. Bourne, J.C. Moore (Eds.), Elements of Quality Online Education: Practice and Direction, vol. 4, Sloan Center for OnLine Education, Needham, MA, 2003, pp. 279–312.

[40] A.C. Sherry, C.P. Fulford, S. Zhang, Assessing distance learners' satisfaction with instruction: a quantitative and a qualitative measure, American Journal of Distance Education 12 (3) (1998) 4–28.

[41] C.-L. Sia, B.C.Y. Tan, K.-K. Wei, Effects of GSS interface and task type on group interaction: an empirical study, Decision Support Systems 19 (4) (1997) 289–299.

[42] J. Siemer, M.C. Angelides, A comprehensive method for the evaluation of complete intelligent tutoring systems, Decision Support Systems 22 (1) (1998) 85–102.

[43] B. Su, C.J. Bonk, R. Magjuka, X. Liu, S.H. Lee, The importance of interaction in web-based education: a program-level case study of online MBA courses, Journal of Interactive Online Learning 4 (1) (2005), retrieved December 30, 2005 from http://www.ncolr.org/jiol/issues/PDF/4.1.1.pdf.

[44] K. Swan, Building learning communities in online courses: the importance of interaction, Education Communication and Information 2 (1) (2002) 23–49.

[45] M.K. Tallent-Runnels, J.A. Thomas, W.Y. Lan, S. Cooper, T.C. Ahern, S.M. Shaw, X. Liu, Teaching courses online: a review of the research, Review of Educational Research 76 (2006) 93–135.

[46] R.L. Thompson, C.A. Higgins, J.M. Howell, Personal computing: toward a conceptual model of utilization, MIS Quarterly 15 (1991) 125–143.

[47] E.D. Wagner, In search of a functional definition of interaction, American Journal of Distance Education 8 (2) (1994) 6–26.

[48] R.M. Walczuch, R.T. Watson, Analyzing group data in MIS research: including the effect of the group, Group Decision and Negotiation 10 (1) (2001) 83–94.

[49] D. Xu, H. Wang, Intelligent agent supported personalization for virtual learning environments, Decision Support Systems 42 (2) (2006) 825–843.

[50] Y. Zhao, J. Lei, B. Yan, C. Lai, H.S. Tan, What makes the difference? A practical analysis of research on the effectiveness of distance education, Teachers College Record 107 (8) (2005) 1836–1884.

J.B. Arbaugh is the Curwood Endowed Professor and a Professor of Strategy and Project Management at the University of Wisconsin Oshkosh. He is an Associate Editor of Academy of Management Learning and Education. His research interests are in online management education, international entrepreneurship, the management of rapidly growing firms, and the intersection between spirituality and strategic management research. Some of his recent publications include articles in Academy of Management Learning and Education, Decision Sciences Journal of Innovative Education, Management Learning, the Journal of Management, Spirituality, and Religion, the Journal of Enterprising Culture, and the Journal of Management Education.

Raquel Benbunan-Fich is an Associate Professor in the SCIS Department in the Zicklin School of Business, Baruch College, City University of New York. She received her Ph.D. in Management Information Systems from Rutgers University, Graduate School of Management. Her research interests include educational applications of computer-mediated communication systems, Asynchronous Learning Networks, evaluation of Web-based systems and e-commerce. She has published articles on related topics in Communications of the ACM, Decision Support Systems, Group Decision and Negotiation, IEEE Transactions on Professional Communication, Information and Management, International Journal of Electronic Commerce, Journal of Computer Information Systems and other journals.