
Jl. of Technology and Teacher Education (2007) 15(2), 233-246

Two Peas in a Pod? A Comparison of Face-to-Face and Web-Based Classrooms

GALE A. MENTZER, JOHNROBERT CRYAN, AND BERHANE TECLEHAIMANOT
University of Toledo

Toledo, OH USA
[email protected]
[email protected]
btecleh@utnet.utoledo.edu

This study compared student learning outcomes and student perceptions of and satisfaction with the learning process between two sections of the same class, a web-based section and a traditional face-to-face (f2f) section. Using a quasi-experimental design, students were randomly assigned to the two course sections. Group equivalency was established using an instrument designed to determine learning preferences, and both versions of the course were delivered by the same instructor. Student learning outcomes compared student test grades and overall grades (including all assignments). To measure student perceptions of student-teacher interactions as well as satisfaction with the course as a whole, identical end-of-semester evaluations were completed and compared. Finally, to provide an unbiased measure of student-teacher interaction, a modified interaction analysis instrument based upon the work of N. Flanders was used. Findings revealed that student performance on tests was equivalent; however, student final grades were lower in the web-based course due to incomplete assignments. Classroom interaction analysis found differences due to delivery methods. Finally, while all student perceptions of the course and the instructor were above average, the f2f group rated both variables statistically significantly higher. Conclusions suggest that the f2f encounter motivates students to a higher degree and also provides students with another layer of information concerning the instructor that is absent in the web-based course.


Recent research in online education has focused upon whether web-based courses provide students with the same degree of personalized learning and content mastery that students experience in face-to-face (f2f) classes (Parkinson, Greene, Kim, & Marioni, 2003). While the trend is moving towards more rigorous design, few studies to date have used experimental designs across several variables, including student learning as well as satisfaction with the learning experience (Meyer, 2003). The purpose of this study was to compare student learning outcomes and student perceptions of and satisfaction with the learning process between a web-based section and a traditional f2f section of the same undergraduate class, Early Childhood Education: Philosophy and Practice. The f2f sections of the course are typically offered during the weekdays, with one section in the late afternoon. The web-based section is offered "anytime, anywhere."

Background

Progress and innovative use of technology in education have greatly improved the quality of web-based courses (Schott, Chernish, Dooley, & Lindar, 2003). To determine whether web-based courses indeed provide students with a comparable, if not superior, learning experience, researchers over the past five years have conducted a plethora of studies comparing aspects of traditionally delivered instruction with online instruction (Rivera, McAlister, & Rice, 2002). Findings from this body of research are mixed, but the general consensus is that students learn just as well using web-based instruction but are less satisfied with the learning experience. Miller, Rainer, and Corley (2003) noted that the more negative aspects experienced by students of web-based instruction include procrastination, poor attendance, and a sense of isolation. Another study noted that online courses are more effective with particular personality types (Daughenbaugh, R., Daughenbaugh, L., Surry, & Islam, 2002), and the Office of Institutional Planning and Research at Sinclair Community College (2000) found that distance learning students achieve lower grades than those who attend f2f classes. Their study suggested that distance learning students are typically of a different ilk, or from a different population, than the traditional student and have other obligations to juggle along with attending school.

Because the majority of studies noted compare existing online courses with existing f2f courses, selection may threaten the internal validity of the findings. When existing courses are used, the students themselves enroll in either the online or the f2f class, thereby selecting to which group they will belong.


This may result in a comparison of nonequivalent groups. Using random assignment can minimize this threat, thereby producing results that are more indicative of treatment effects rather than group differences. To determine whether the "typical" student might fare just as well in an online course as in an f2f course, this study randomly assigned students to one of the two sections in order to compare equivalent groups, thereby controlling for predispositions towards one type of learning style over another.

The course in this study, Early Childhood Education: Philosophy and Practice, is an entry-level survey course required for early childhood education (ECE) majors who have just entered their preprofessional program (first-year students). The host university is a medium-sized campus (20,000+ students) and the College of Education enrolls 2,000+ students. Currently there are more than 900 students enrolled in the Bachelor of Education in Early Childhood Education program, which prepares students to teach children ages 3-8 with a variety of learning styles, including those at risk, typically developing, mild to moderately disabled, and gifted. The f2f sections of the course are scheduled to meet twice weekly in seminar fashion while the online course can be accessed at any time. Content covered in all sections of the course includes ECE history, theorists, curriculum, inclusive learning environments, designing and planning themes, evaluation, and parent involvement. Central to the course is the development of reflective thinking and its application to reflective practice.

In an attempt to make both sections of the course "equivalent" in terms of the teaching-learning process, the instructor used duplicate syllabi, including duplicate assignment requirements. For purposes of equating attendance rates, students in the web-based section were required to attend at least two "Live Chat" sessions per week. These Live Chats (1 hour each) served to replace the class discussion in the f2f section. Students in both sections were given equal credit for attendance, and the final weight for attendance in both sections was the same. Further, all students in both sections were assigned to small groups (four to six students) for in-class assignments. Whereas f2f students met regularly in class, those in the web-based class were required to meet together online in group chat rooms to discuss the same small group assignments that were given to the f2f students during lecture. Small groups in both sections were required to submit summaries of the small group discussions. Again, the attempt was made to control for similar experiences in both sections. Additionally, all assignment due dates were the same and carried the same weight in the overall grading scheme. Discussion topics and reading assignments for each f2f meeting were the same as those for the scheduled Live Chats in the web-based section.


The instructor's lecture notes were reproduced for each week and shared with both sections on the first day of the week: printed versions were given out in the f2f section, and posted versions were available for the web-based section. Lastly, the instructor attempted to use the same instructional strategies throughout the entire semester, strategies taken from the effective teaching literature that correlate with increased student learning, such as encouraging and maximizing student questions and student-to-student dialogue while minimizing extended lecture. Wherever possible the instructor used "praise" of student answers and encouraged students to do the same toward peers.

METHOD

Often students who enroll in web-based courses have a predisposition towards this means of learning (Diaz & Cartnal, 1999). In other words, they choose online courses because they feel comfortable learning online. In addition, students who are comfortable with their level of computer competency are more likely to enroll in a web-based course than those who are insecure with either the use of the computer or with a more generative learning environment (Parkinson et al., 2003). These issues threaten the internal validity of findings based upon comparisons between web-based and f2f courses. The groups, by nature of learning preference and computer comfort levels, are not equivalent, and therefore findings cannot be generalized beyond the restrictions of the studies. To address this weakness, this study used a quasi-experimental design that infused nonrandom selection with random assignment to the control (f2f) and experimental (web-based) groups. To accomplish this, all ECE students wishing to enroll in the course were required to contact the department office prior to receiving access to registration. Upon contacting the office, they were asked whether they would be amenable to allowing the department to assign them to either the f2f or the web-based section of the course. While students volunteered to participate in the study, random assignment to the groups strengthened the internal validity of the study and enhanced group equivalency. Because there were two additional sections of the course offered during the same semester, students who declined to participate were free to enroll in either of the other sections.

To validate group equivalency, all students completed the Visual, Aural, Read/write, Kinesthetic (VARK) instrument, a diagnostic instrument designed to determine learning preferences (Fleming & Bonwell, 2002).


VARK reliability and validity indices are currently under research. Using the VARK, students can be classified with mild, strong, or very strong preferences in any of the four learning styles. In addition, students can show multimodal tendencies (more than one style appears to be preferred). For the purposes of this study, students were classified in one of five categories: visual, aural, read/write, kinesthetic, and multimodal learners. The frequencies of VARK learning style preferences in each group were then compared using a chi-square goodness-of-fit test (using the control group frequencies as the expected distribution) to determine whether group differences were statistically significant (α = 0.05) rather than the result of sampling error.
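For readers who want to reproduce this kind of comparison, the sketch below runs a chi-square goodness-of-fit test in Python with SciPy. The category counts are hypothetical placeholders rather than the study's data, and the expected distribution is derived from the control-group proportions, as described above.

```python
# A chi-square goodness-of-fit test on VARK category counts (hypothetical data).
import numpy as np
from scipy.stats import chisquare

# Counts per category: [Visual, Aural, Read/write, Kinesthetic, Multimodal]
f2f_counts = np.array([2, 1, 3, 6, 6])   # control group defines the expected distribution
web_counts = np.array([1, 2, 2, 5, 5])   # experimental group supplies the observed counts

# Scale control-group proportions to the experimental group's total so that
# observed and expected frequencies sum to the same value.
expected = f2f_counts / f2f_counts.sum() * web_counts.sum()

chi2, p = chisquare(f_obs=web_counts, f_exp=expected)
print(f"chi2 = {chi2:.2f}, df = {len(web_counts) - 1}, p = {p:.3f}")  # compare p to alpha = 0.05
```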

To control other confounding variables that might result from the delivery methods of the two sections of the course, the same instructor taught both sections during the same semester. The instructor took care to compare the design and delivery of both sections of the course to ensure that topics covered, work required, testing, and the classroom experience were as closely matched as possible. The syllabi of both courses were also compared by a colleague to provide content validity. Students enrolled in the web-based section, while local, did not have f2f course-related contact with the instructor during the semester, nor were any of these students enrolled in other courses taught by the faculty member.

To provide an unbiased measure and comparison of student-teacher interaction between groups, a modified interaction analysis (IA) instrument based upon the work of Flanders (1970) was used. Flanders' IA is a systematic method of coding spontaneous verbal communication that has been and is still currently used in classroom observation studies to examine teaching interaction styles. The IA instrument consists of the 10 categories listed in Table 1 (seven used when the teacher is talking and three when the students talk):

Table 1
Flanders' Interaction Analysis Categories

Activity         Category
Teacher talks    Accepts feelings
                 Praises or encourages
                 Accepts or uses ideas of pupils
                 Asks questions
                 Explains
                 Gives directions
                 Criticizes
Student talks    Responds
                 Initiates
                 Silence/Confusion


In addition to the original Flanders IA categories, several categories were added to measure student interaction in finer detail. The original instrument categorized student interaction as simply responding, initiating a topic, or silence/confusion. The purpose of IA was to examine teaching styles. The purpose of this study was not to determine teaching styles, but rather to determine whether student participation, student-student interactions, and student-teacher interactions were similar in both groups. To this end, four categories were added to the original student categories of "responds," "initiates," and "silence/confusion." The new categories were "validation of others' ideas," "praise or courtesy remarks," "questions or asks for clarification," and "silence due to 'down time'." This last category was designed to earmark the extra time needed in a live online chat: lengthy contributions in the chat room require longer time both for typing and for reading, so "silence/confusion" is not an appropriate label for what is occurring. The "down time" category was used only for the web-based course and was not a function of comparison between groups. Down time was calculated by determining the amount of time it took to read a response and then doubling that amount to account for composing it; the estimate was verified by rereading logs of the live chats. This time was then subtracted from the full amount of time spent in inactivity or silence to determine the amount of time to be attributed to "silence/confusion."
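The down-time adjustment amounts to simple arithmetic. The sketch below is one possible implementation, assuming the chat contributions are available as plain text; the function name and the reading-speed constant are illustrative assumptions, not values reported here.

```python
# One possible implementation of the down-time adjustment (all values hypothetical).
WORDS_PER_SECOND = 3.0  # assumed average reading speed


def silence_confusion_seconds(total_inactive_seconds, chat_messages):
    """Subtract estimated down time (reading plus composing) from total
    inactivity to isolate genuine silence/confusion."""
    read_time = sum(len(msg.split()) / WORDS_PER_SECOND for msg in chat_messages)
    down_time = 2 * read_time  # doubled to account for composing a reply
    return max(total_inactive_seconds - down_time, 0.0)


# Example: 300 seconds of inactivity around two longer chat contributions.
messages = ["This is a sample student contribution to the discussion " * 3,
            "Another fairly long reply from a peer in the chat room " * 3]
print(silence_confusion_seconds(300, messages))
```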

IA scoring is measured by using an observer to listen to the classroom interaction and take note of the type of interaction taking place from the list of categories. Ordinarily, the observer marks a category every three seconds. For this study, frequencies of categories were then tabulated to determine trends by comparing categories within a session as well as the sequences between categories. For example, one category-sequence comparison explored how frequently teacher questions were followed by student responses as opposed to being followed by silence. In this study, comparisons were made between f2f and web-based discussions to determine whether the general interaction experience between the groups varied. If it did vary, that might indicate that the two discussion experiences were different.
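The tabulation of category frequencies and category sequences can be illustrated with a short sketch. It assumes the observer's marks are available as an ordered list of category labels (one every three seconds); the labels and the sample sequence are hypothetical.

```python
# Tabulating IA category frequencies and adjacent-category sequences
# (the codes list is a hypothetical observer record, one mark per three seconds).
from collections import Counter

codes = ["asks_question", "silence", "responds", "asks_question",
         "responds", "explains", "explains", "silence"]

frequencies = Counter(codes)                      # within-session category counts
transitions = Counter(zip(codes, codes[1:]))      # ordered pairs of consecutive marks

print(frequencies["explains"])
print(transitions[("asks_question", "responds")],  # question followed by a response
      transitions[("asks_question", "silence")])   # question followed by silence
```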

To conduct the IA, two 20-minute sessions were randomly selected and videotaped from all possible f2f classroom discussions. Two corresponding web-based chat room discussions were also monitored in real time for 20 minutes (the observer sat in on the chat). The resulting frequencies were then compared using a chi-square test of homogeneity to observe differences between multiple variables with multiple categories. To compensate for the sometimes unwieldy nature of large chat rooms, the experimental class chat rooms were limited to 10 students per session. Two sessions on the same topic were offered per week to accommodate this limit.
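A minimal sketch of such a chi-square test of homogeneity follows, assuming a session-by-category table of frequency counts; the counts shown are hypothetical, not the observed data.

```python
# Chi-square test of homogeneity on a session-by-category frequency table
# (counts are hypothetical, not the observed data).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sessions (WB 1, WB 2, F2F 1, F2F 2); columns: IA categories.
counts = np.array([
    [12,  5, 30, 18],
    [10,  6, 28, 20],
    [25, 14, 40,  6],
    [30,  4, 22,  8],
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```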


Finally, the examination of student learning outcomes compared group means of student test grades and overall grades using independent-samples t-tests. Test scores (as opposed to letter grades) were used to allow for more subtle measurement. To measure student perceptions of student-teacher interactions as well as satisfaction with the course as a whole, an identical end-of-semester evaluation was completed and an independent-samples t-test comparing mean evaluation scores for the groups was calculated.
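A sketch of the independent-samples t-test used for these comparisons appears below; the score lists are hypothetical placeholders rather than the study's data.

```python
# Independent-samples t-test comparing group means (hypothetical scores).
from scipy.stats import ttest_ind

f2f_scores = [88, 92, 79, 85, 90, 84, 91, 87]
web_scores = [82, 90, 78, 84, 88, 80, 86, 83]

t, p = ttest_ind(f2f_scores, web_scores)  # pooled-variance t-test by default
print(f"t = {t:.2f}, p = {p:.3f}")
```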

FINDINGS

Sample Information

Of the total (100+) students who enrolled in all four sections of the ECE: Philosophy and Practice course, 36 agreed to participate in the random assignment to either the control or experimental group. Both sections had 18 students: 1 male and 17 females. All of the students in both sections were considered traditional students in that they enrolled in college right out of high school. All students were enrolled in the college's Early Childhood Education teacher education licensure program.

Group Equivalency

The VARK survey of learning preferences was completed by 18 students in the f2f group and 15 students in the web-based group. The f2f students completed the VARK in class while the web-based students were asked to take the survey online. Three students in the web-based course did not complete the VARK. The distribution of learning preferences for each group is displayed in Table 2. A chi-square goodness-of-fit test was administered using the control group as the expected frequencies and the experimental group as the observed frequencies. Because the chi-square test examines proportions, unequal sample sizes are acceptable. Results showed no statistically significant difference between group learning preferences (χ² = 3.36; df = 4; p > 0.05). Therefore it is assumed that the group learning styles were equivalent and that any differences in learning style preferences were due to sampling error.


Table 2
Distribution of Learning Style Preferences

Group   V   A   R   K   MM
F2F     1   0   2   6   9
WB      1   0   1   7   6

Interaction Analysis

Results of the chi-square test of homogeneity revealed that a statistically significant difference did indeed exist overall between the nature of teacher/student interaction during class discussions in the two groups (χ² = 900.035; df = 9; p < 0.001). An examination of the standardized residuals revealed which of the individual interaction categories contributed to the rejection of the null hypothesis, which stated that the two groups were equal. Table 3 illustrates the categories/sessions that contributed to the significant χ² value. The letters in the chart indicate whether the observed frequency was significantly higher (H) or lower (L) than expected. So, for example, during web-based session 1, teacher explaining occurred less frequently than expected based upon session averages. (Note: WB represents the web-based course and F2F the corresponding face-to-face course.)

Table 3
Significant Differences in Classroom Interaction

Category                        WB 1    WB 2    F2F 1    F2F 2
Teacher categories
  Accepts feelings               H
  Explains                       L       L                 H
Student categories
  Responds                       H               H         L
  Asks questions                                 H
  Initiates an idea                              H
  Supports others in class       H
  Silence or confusion           H       H       L
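A sketch of how standardized residuals can be computed and flagged as higher (H) or lower (L) than expected, mirroring the labels in Table 3, is shown below; the frequency table is hypothetical, not the study's data.

```python
# Standardized (Pearson) residuals for a session-by-category table
# (counts are hypothetical, not the observed data).
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [12,  5, 30, 18],   # WB 1
    [10,  6, 28, 20],   # WB 2
    [25, 14, 40,  6],   # F2F 1
    [30,  4, 22,  8],   # F2F 2
])

chi2, p, dof, expected = chi2_contingency(counts)
residuals = (counts - expected) / np.sqrt(expected)

# Flag cells roughly two standard errors above (H) or below (L) expectation,
# mirroring the H/L labels used in Table 3.
flags = np.where(residuals > 2, "H", np.where(residuals < -2, "L", ""))
print(flags)
```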

In general, the instructor tended to spend less time explaining in the chat room than in the classroom. In a web-based course, explanations often take the form of web pages and are not a typical use of the chat room.


Because only two samples from each group were observed, it is possible that other f2f sessions may have resulted in less time devoted to the instructor providing explanations. The general trend, however, is that while the teacher tended to explain more in the f2f section, explanations did not dominate the web-based course discussions.

The instructor also allowed for more and longer periods of silence in the chat room than in the classroom. This was most likely due to the expectant nature of chat room discussions. The instructor, without the aid of visual contact with the students, was unable to determine whether students were simply thinking and formulating questions and answers or whether they indeed had nothing to add. Students also may have been waiting for another student to contribute or for the instructor to continue. It was observed, as often happens in chat room discussions, that a period of silence was followed by several contributions from students popping onto the screen almost simultaneously. In an f2f setting, students and the instructor can (hopefully) tell exactly when a member of the class begins speaking. The chat room discussions smudge this demarcation into fluctuations of silence and activity.

Student responses to the instructor were higher than expected in the first web-based session and its corresponding f2f session. It is probable that the topic that week generated more student interest or that the discussions were designed to elicit student responses. The second f2f session resulted in student responses being much lower than expected, which is not surprising considering that the teacher-explaining category was higher than expected that day. The first f2f session also saw more student-generated questions and ideas, supporting the suggestion that that particular sample of class discussion was more spirited than the norm.

An unexpected difference between the two groups occurred in the first web-based session, where students showed support for one another to a higher degree than expected (see Table 3). Students showed support by validating other students' comments. Online chats, as opposed to speaking in front of a class, may make students feel more comfortable, thereby encouraging students not only to support one another more openly but also to take on a more empowered role in the class discussion.

Student Evaluations

Students in both classes completed identical course evaluations before their final exam. The evaluation was one used by the department and included items that explored student perceptions of both the instructor and the course. Instructor items focused upon perceived teacher effectiveness (ability to motivate students, to encourage students, fairness in student treatment, availability for consultation, and a personal interest in the students).


Course items included those dealing with the general organization of the course, its value as it related to the students' major area of study, the textbooks, exams, and general assignment workload. Web-based students took the evaluation online and f2f students completed it in class with the instructor absent. All evaluations were anonymous.

The results of the t-test showed that students in the f2f class rated the instructor and the course significantly higher than those in the web-based course (p < 0.001). Mean evaluation scores for the f2f and web-based classes were 1.22 and 1.82 respectively on a 5-point scale where a "1" indicated the highest ranking (outstanding) and a "5" the lowest (poor). So, in both cases the instructor received very good scores; yet the students in the f2f course believed the quality of the instructor and the course to be better than did those in the web-based course. T-tests were then conducted on individual questions to locate where the classes differed significantly. The alpha level was lowered to 0.002 to control the Type I error rate across comparisons (alpha, 0.05, divided by 22 items), and the analysis revealed statistically significant differences on each of the 22 questions. Mean scores for evaluation items in the f2f course ranged from an outstanding rating of 1.04 ("demonstrated a sincere interest in the subject") to a very high rating of 1.50 ("promptness in returning graded assignments"). The web-based mean scores ranged from a best rating of 1.47 ("demonstrated comprehensive knowledge of the subject") to a low rating of 2.40 on two items ("promptness in returning graded assignments" and "offered assistance to students with problems connected to the course"). All of this hints at extra information students might collect and process concerning an instructor based upon direct observation occurring in the f2f setting but absent in the web-based venue. For example, in the web-based course, students have limited access to instructor interaction with other students. A student in the web-based class will not ask a question about personal difficulties with the course in the chat room but rather will use e-mail. However, it is common for students to ask questions of this type before, during, and after an f2f class where other students can observe the exchange. It is logical, therefore, that an instructor might receive a lower rating on an item such as offering assistance to students with problems connected to the course in a web-based course where this quality is less evident.

To examine the differences further, effect sizes were scrutinized, and those that exceeded 0.75 were considered to indicate a large difference between the groups based upon categories established by Cohen (1962). Table 4 provides a list of the items from the evaluation that exhibited large effect sizes.


The effect size was calculated by subtracting the web-based mean score from the f2f mean score and dividing the result by the pooled standard deviation.
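A sketch of this effect-size calculation (difference in means divided by the pooled standard deviation) follows; the rating arrays are hypothetical, and because a rating of 1 is best, the magnitude of the result is what would be reported.

```python
# Effect size as the mean difference divided by the pooled standard deviation
# (rating arrays are hypothetical; 1 = outstanding, 5 = poor).
import numpy as np

f2f_ratings = np.array([1, 1, 2, 1, 1, 2, 1, 1])
web_ratings = np.array([2, 3, 2, 2, 3, 2, 2, 3])


def effect_size(a, b):
    """Cohen's d using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)


print(f"effect size = {abs(effect_size(f2f_ratings, web_ratings)):.2f}")
```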

Table 4
Student Evaluation Items with Large Effect Sizes

Item                                                                     Effect Size
Offered assistance to students with problems connected with course          1.34
What grade would you assign the instructor (a, b, c, d, or f)               1.14
Demonstrated promptness in returning graded assignments and exams           1.08
Meaningful class preparation                                                 1.01
Demonstrated sincere interest in the subject                                 0.98
Expected grade in this course                                                0.96
Personal interest and sensitivity to student problems                        0.94
Availability for consultation                                                0.92
Demonstrated respect for students                                            0.91
Demonstrated fairness and reasonableness in evaluating students              0.86
Demonstrated ability to explain course material                              0.76
Encouraged independent thought by students                                   0.75

It is important to analyze the results in the correct light. Overall, the web-based students gave the instructor a high rating and the f2f students gave him a stellar rating; in neither case did the students indicate a negative experience but rather a slightly less positive one. Interesting comparisons indicated that the students in the f2f course expected an average grade of A- while those in the web-based course expected a B-. As far as grading the instructor, f2f students assigned an average grade of A and the web-based students assigned a grade of B+. Many studies have shown a high correlation between students' expected grades and their evaluations of the instructor. To determine whether students in one section of the course actually did perform better than those in the other, exam grades and overall grades were compared.

Three indicators of student success were examined: (a) the midterm examination, (b) the final examination, and (c) overall points earned for the semester (including other assignments). Before making comparisons, an F test for homogeneity of variance was calculated, and results showed that the variances were unequal for the midterm and the overall grade. Appropriate tests of mean scores based upon variance issues were then performed. Of the three comparisons, only the mean score for overall grade differed at a statistically significant level (p = 0.02).
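A sketch of this two-step procedure appears below, assuming raw point totals for each group: an F ratio of sample variances checks homogeneity of variance, and the result determines whether a pooled or a Welch t-test is used. The scores are hypothetical, not the study's data.

```python
# F ratio of sample variances, then a pooled or Welch t-test as appropriate
# (point totals are hypothetical).
import numpy as np
from scipy import stats

f2f = np.array([93, 88, 95, 90, 92, 96, 89, 94])
web = np.array([85, 97, 70, 92, 60, 98, 75, 88])

var_f2f, var_web = f2f.var(ddof=1), web.var(ddof=1)
f_ratio = max(var_f2f, var_web) / min(var_f2f, var_web)
df_num = df_den = len(f2f) - 1                          # equal group sizes in this example
p_var = min(1.0, 2 * stats.f.sf(f_ratio, df_num, df_den))  # two-tailed p for the variance ratio

equal_var = p_var > 0.05                                # pooled t-test only if variances look equal
t, p = stats.ttest_ind(f2f, web, equal_var=equal_var)
print(f"F = {f_ratio:.2f} (p = {p_var:.3f}); t = {t:.2f}, p = {p:.3f}")
```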


Students in the f2f course averaged an A- and those in the web-based course averaged a B. It is interesting that students appeared to predict their final grades with accuracy, indicating that the grading process for both sections was clear-cut in the minds of the students. The main difference between the tests considered in the comparison and the overall points earned for the semester were the other assignments required throughout the semester. A closer look at student records for the two sections revealed that students in the web-based course did not earn lower grades on these assignments but merely failed to submit some of them, suggesting that learning outcomes were similar but that the personal contact of an f2f course positively influenced and motivated students to turn in assignments.

CONCLUSIONS/RECOMMENDATIONS

General findings of this study showed that two equivalent groups, randomly assigned to either an f2f or a web-based course, do not have equal experiences in the area of student perceptions. Learning outcomes can be considered to be equal based upon test scores. Because the instructor was the same for both courses, it can be concluded that the course delivery may have some effect on the variables examined. The interaction analysis showed that the instructor tended to explain less in group discussions in the web-based course. Because only two pairs of discussion sessions were scrutinized, findings in other areas of interaction, and especially student interaction, may not generalize. Student evaluations of the course and the instructor also differed. Students in the web-based course tended to rate both the course and the instructor lower than students in the f2f course, although ratings for both groups were considered to be above average. Finally, student achievement differed only in the area of completing course assignments. Test scores showed no statistically significant difference, indicating that student mastery levels were essentially the same; yet students in the web-based course were more likely to omit submitting one or more assignments. Students in the web-based course may be less conscientious or less motivated to complete assignments.

Limitations of this study include a small sample size and a restricted population. What occurs in an Early Childhood Education course may be different from what occurs in other content areas. It is recommended that future research apply this model to other content areas and that more research be conducted to explore the specific differences in course delivery methods that account for differences in student perceptions. As noted earlier, many studies have shown web-based courses to be as effective as traditionally delivered courses.


However, the majority of these studies were nonexperimental, using existing groups as the control and experimental groups. It is suggested that some of the differences found between the f2f and web-based groups in this study were in fact due to the random assignment of students to the groups. Students who may not be familiar or comfortable with web-based courses were in the experimental group, which often does not occur when existing sections are used. Their perceptions and experiences, therefore, were more indicative of those of the "average" student as opposed to those of students who generally enroll in web-based courses.

References

Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145-153.

Daughenbaugh, R., Daughenbaugh, L., Surry, D., & Islam, M. (2002). Personality type and online versus in-class course satisfaction. Educause Quarterly, 25(3), 71-72.

Diaz, D., & Cartnal, R. (1999). Students' learning styles in two classes: Online distance learning and equivalent on-campus. College Teaching, 47(4), 130-135.

Flanders, N.A. (1970). Analyzing teaching behavior. Reading, MA: Addison-Wesley.

Fleming, N.D., & Bonwell, C.C. (2002). VARK: A guide to learning styles.Retrieved September 9, 2003 from http://www.vark-learn.com/english/index.asp

Meyer, K. (2003). The web's impact on student learning. T.H.E. Journal, 30(10), 14-24.

Miller, M.D., Rainer Jr., P.K., & Corley, J.K. (2003). Predictors of engagement and participation in an on-line course. Online Journal of Distance Learning Administration, 6(1). Retrieved January 3, 2007 from http://www.westga.edu/~distance/ojdla/spring61/miller61.htm

Office of Institutional Planning and Research. (2000). Does distance learning make a difference? A matched pairs study of persistence and performance between students using traditional and nontraditional course delivery study modes. Sinclair Community College, Dayton, OH. (ERIC Document Reproduction Service No. ED477199)

Parkinson, D., Greene, W., Kim, Y., & Marioni, J. (2003). Emerging themes of student satisfaction in a traditional course and a blended distance course. TechTrends, 47(4), 22-28.

Rivera, J.C., McAlister, M.K., & Rice, M.L. (2002). Comparison of student outcomes & satisfaction between traditional & web-based course offerings. Online Journal of Distance Learning Administration, 5(3). Retrieved December 3, 2003 from http://www.westga.edu/%7Edistance/ojdla/fall53/fall53.html

Schott, M., Chernish, W., Dooley, K.E., & Lindar, J.R. (2003). Innovations in distance learning program development and delivery. Online Journal of Distance Learning Administration, 5(2). Retrieved September 9, 2003 from http://www.westga.edu/%7Edistance/ojdla/summer62/schott62.html
