Volume 26, Number 3, March 2012
A MAGNA PUBLICATION

In This Issue
Exploring the Impact of Institutional Policies on Teaching . . . 2
Active Learning: Changed Attitudes and Improved Performance . . . 3
Assessing Critical Thinking Skills . . . 4
Cell Phones in Class: A Student Survey . . . 5
Too Much Focus on Facts? . . . 6
What Classes and Small Groups Have in Common . . . 6
Online or In Class? . . . 7
Millennial Students: They Aren’t All the Same . . . 8

The Final (Office) Hours

By Gary R. Hafer, Lycoming College, PA, [email protected]

The final portfolio of student work (be it writings, drawings, or a collection of different kinds of work) presents the instructor with a conundrum. As the culmination of student work, it needs to be submitted at the end of the course, but feedback opportunities then are severely limited. Those of us who use portfolio assignments do provide feedback at multiple points throughout the semester, but when the portfolio is completed, the course has ended and this final version cannot be discussed with students. Worse than that, for years, I cringed as I saw the graded portfolios accumulate outside my office. Some were never picked up.

Interested in a better alternative, I initiated “the final hour,” an open office hour for any student interested in conversing about his/her graded portfolio. The procedure is straightforward. As with my previous practice, students have until Monday noon during final examination week to submit their portfolios. I’ve seen the original and revised pieces in the portfolios throughout the semester and during a “trial run” conference where I give them a ballpark grade of where the portfolio is presently situated. This enables me to read the final product quickly, usually finishing by Tuesday evening, after which I send out an e-mail with a grade report. In the e-mail header, I announce first: “Questions? Discussion? Complaints? FINAL OPEN OFFICE HOURS, Wednesday 10-12.” The e-mail note contains all the details and the final grade, although I typically don’t submit final grades to the registrar until after that conference time; I’m open to students’ input.

Final conference attendance varies, and so do the reasons why students decide to drop by. Some want to chat, just like they do with me before class starts. Some others want to see what I liked, delighted that their final grade is higher than they expected. Still others solicit empathy; I listen to them reason through their disappointment, which helps me to understand the decisions they made—or did not make—in revision. They tell me this time is comforting to them too. One student just wanted to tell me “how hard it was to even earn a D.” I find there are learning opportunities during this last conference as students and I make our way through their portfolios and I share my reactions to them.

The final conference also helps me. It makes me a more careful final grader because, whether a student attends the final office hour or not, I may have to face him or her and defend my decision. That influence is not debilitating; rather, it is mightily persuasive in keeping me centered on making my evaluation “honest.” As Peter Elbow notes in his book Everyone Can Write (p. 357), the high-stakes response is a “critical” one that “is more likely to misfire or do harm because of how it is received—even if it is sound…” The final office hour gives me an opportunity to listen and to see how that graded message is received—a rare opportunity to hear a student’s side after the final portfolio is graded. The student controls the final hour with questions and complaints, all of which I respond to. I discover, however, that I do far more listening than talking.

The final hour also provides a space for quick resolution. Without it, grade debate can linger on. One semester I had a student and his father debating whether to appeal the final portfolio grade, which for the student meant the final course grade; the e-mail discussions went back and forth between the freshman dean and the student’s parent, with me as the bystander, supplying information and commentary along the way only to the dean. It was a bizarre way to look at my own grading, defending it in the role of a third party. Since implementing the final hour, I’ve avoided such scenarios.

Although I’m responsible for the academic integrity of the course, I also understand that I need to keep communication open, even after students have finished the course. Therefore, I’m not averse to changing a grade as a result of the final conference. Yet, I never have and no student has asked me to do so. Instead, that final hour provides some-

Continued on page 3
Exploring the Impact of Institutional Policies on Teaching

Here are three questions of interest to those of us concerned with institutional support of teaching: 1) Is the strength of an institution’s “culture of teaching” or policy support for teaching and learning reflected in faculty members’ pedagogical practices? 2) Are “cultures of teaching” more prevalent at institutions with “learner centered” policies? 3) Do the relationships between institutional policies, faculty cultures, and teaching practices differ across institutional types?

Those questions were addressed in a recent study. Definitions of key terms help in understanding the findings. A “teaching culture” involves a “shared commitment to teaching excellence and meaningful assessment of teaching.” (p. 809) The larger goal of this inquiry was to determine whether institutional policies can be used to create cultures for teaching on a campus and then whether those cultures might encourage faculty to use effective pedagogical practices. To that end, the researchers considered 18 different policies supportive of teaching and learning experiences for first-year students. For example, are senior faculty (associate and full professors) required to teach first-year seminars? Do senior faculty teach other first-year courses? Beyond student ratings, does the institution assess the effectiveness of first-year courses? Are learning community opportunities offered to first-year students? As for effective pedagogical practices, researchers considered two in the study: whether teachers provided first-year students with opportunities to learn about people with different background characteristics or different attitudes and values, and the extent of informal interaction faculty had with students outside of class. Study results are based on data collected from 5,612 faculty members (at all ranks) at 45 different institutions.

The researchers conclude the following about findings related to the first question: “Scant evidence suggests that institutional policies in support of teaching and learning are directly related to faculty members’ teaching practices.” (p. 819) Were “cultures for teaching” more prevalent at institutions with learning-centered policies? “There appears no clear pattern indicating a relationship between institutional policy and faculty perceptions.” (p. 819) Rather, familiar institutional characteristics, such as the Carnegie classification of institutional type and institutional size, explained more than 80 percent of the variance in institutional cultures of teaching and learning. As for whether relationships between policies, cultures, and teaching practices differed across institutional types, the answer was yes, particularly between doctoral-granting universities and other types of institutions in the sample.

Here’s the overall conclusion: “Perhaps the most salient and consistent finding from this analysis is that institutional-level policies have no more than a trivial relationship, either directly or indirectly through their influence on faculty culture, with the teaching practices employed by an institution’s faculty. Instead, traditional institutional descriptors, including size, selectivity, and control—but especially Carnegie classification—are consistent predictors of both faculty practices and culture.” (p. 822)

It is important to note that this research looked at a sample of policies supportive of teaching and learning, and it considered two (out of many) characteristics of effective teaching. Even so, the results give some indication of how difficult it is to change institutional cultures. Policy changes supportive of teaching and learning face the strong headwinds of tradition and faculty autonomy.

Reference: Cox, B. E., McIntosh, K. L., Reason, R. D., and Terenzini, P. T. (2011). A culture of teaching: Policy, perception, and practice in higher education. Research in Higher Education, 52 (8), 808-829.
Editor-at-Large: Maryellen Weimer, Ph.D. E-mail: [email protected]
Editor: Rob Kelly
President: William Haight
Publisher: David Burns

For subscription information, contact Customer Service: 800-433-0499; E-mail: [email protected]; Website: www.magnapubs.com
Submissions to The Teaching Professor are welcome. When submitting, please keep these guidelines in mind:
• We are interested in a wide range of teaching-learning topics.
• We are interested in innovative strategies, techniques, and approaches that facilitate learning and in reflective analyses of educational issues of concern.
• Write with the understanding that your audience includes faculty in a wide variety of disciplines and in a number of different institutional settings; i.e., what you describe must be relevant to a significant proportion of our audience.
• Write directly to the audience, remembering that this is a newsLETTER.
• Keep the article short; generally between 2 and 3 double-spaced pages.
• If you’d like some initial feedback on a topic you’re considering, you’re welcome to share it electronically with the editor.
The Teaching Professor (ISSN 0892-2209) is published 10 times per year by Magna Publications Inc., 2718 Dryden Drive, Madison, WI 53704. Phone 800-433-0499; Fax: 608-246-3597. Email: [email protected]. Website: www.magnapubs.com. One-year subscription: $89. (Multiple print subscriptions and Group Online Subscriptions are available; call Customer Service at 800-433-0499 for information.) Photocopying or other reproduction in whole or in part without written permission is prohibited. POSTMASTER: Send change of address to The Teaching Professor, 2718 Dryden Drive, Madison, WI 53704. Copyright ©2012, Magna Publications Inc.

Authorization to photocopy items for internal or personal use of specific clients is granted by The Teaching Professor for users registered with the Copyright Clearance Center (CCC) Transactional Reporting Service, provided that $1.00 per page is paid directly to CCC, 222 Rosewood Drive, Danvers, MA 01923; Phone 978-750-8400; www.copyright.com. For those organizations that have been granted a photocopy license by CCC, a separate system of payment has been arranged.
Active Learning: Changed Attitudes and Improved Performance

Too often, active learning activities are isolated events in a course. They happen every now and then but aren’t a regular part of the course. The intermittent use of active learning raises the question of how much is needed to accrue gains in learning outcomes, like higher exam scores and course grades.

In reviewing the research on active learning in statistics, the authors of the article cited below, who are statistics faculty themselves, found some research in which certain active learning experiences did not produce measurable gains on exam performance. They “suspect the key components of successful active learning approaches are using activities to explain concepts and requiring students to demonstrate that they understand these concepts by having them answer very specific rather than general questions.” (p. 3)

To that end, they designed an introductory behavioral/social science statistics course using what they describe as a “workbook curriculum.” Students read a short chapter (five single-spaced pages) introducing a topic. After reading, students answered questions, completed a problem, and summarized the results of their computation. Then they submitted this homework assignment online before class and got feedback on their work, also before class. These homework assignments counted for 17 percent of their course grade.

In class, the instructor began by answering questions about the homework and followed that with a brief lecture during which information in the reading was reviewed. Typically this consumed 15 to 20 minutes of the 75-minute period. Then students completed a “workbook” activity. “As students worked through each subsection, they answered increasingly complex conceptual and/or computational questions” (p. 6). They could access answers while they worked. The instructor was also available to answer questions. Students were encouraged but not required to work with a partner. The instructor ended the period with another short lecture summarizing the content presented in the workbook activity. Workbook answers were not graded. Grades were based on the homework assignments, four exams, and a final. Basically, every day in class was structured this way.

To study the effects of students’ exposure to this kind of active learning experience, the faculty researchers looked at student attitudes toward statistics. They measured these with an already developed instrument, the Survey of Attitudes Towards Statistics (SATS), which contains 36 items and six subscales, including these three examples: one measuring student feelings toward statistics (the affect subscale), another measuring student beliefs about their ability to understand statistics (the cognitive competence subscale), and one measuring student beliefs about the usefulness of statistics in their lives (the value subscale). The 59 students who experienced the workbook curriculum completed this survey before and at the end of the course. The researchers also looked at the effects of this course design on exam scores and final course grades.

The attitudes and performance of students in the experimental group were compared with the attitudes and performance of 235 students in 20 other sections of courses similar to this one. All were general education courses that fulfilled quantitative requirements. All enrolled 30 or fewer students and required a prerequisite course in algebra.

The results confirmed the value of extensive active learning experiences in a course. “Our sections reported liking statistics significantly more than the comparison group (i.e., more positive affect scores). Our students also reported significantly higher statistical cognitive competence (i.e., confidence in their ability to understand and perform statistical procedures) than the comparison group. While students in our sections thought statistics was harder than the comparison group they also liked statistics more than the comparison group.” (p. 9)

“We suspect that most statistics instructors would want their students to report they like and understand statistics; however, we also suspect that most instructors are more concerned with their students’ actual ability to perform and understand statistics.” (p. 9) And their results did show that those more positive attitudes were positively associated with performance on the course’s comprehensive final.

The instructors also felt their teaching benefited from the approach. They were able to interact with individual students more often. They found themselves using student names more often, answering questions more frequently, and offering more feedback to individual students. They did find some student questions challenging. “Instructors must be comfortable ‘thinking on their feet.’ For our part, we found the unpredictability of students’ questions to be invigorating. We had become bored with teaching statistics but when we changed to the workbook approach, we were again excited about teaching the course.” (p. 13)

Reference: Carlson, K. A. and Winquist, J. R. (2011). Evaluating an active learning approach to teaching introductory statistics: A classroom workbook approach. Journal of Statistics Education, 19 (1), 1-22.
The Final (Office) Hours (continued from page 1)

thing different: an exchange and a shared understanding that can come only after a final piece of work is discussed. The worst that has ever come out of the final hour is to have students agree to disagree, parting without acrimony. The stack of unclaimed portfolios outside my office is significantly smaller now. That reason alone justifies the final hour opportunity.
Assessing Critical Thinking Skills

The guidelines suggested below propose how critical thinking skills can be assessed “scientifically” in psychology courses and programs. The authors begin by noting something about psychology faculty that is true of faculty in many other disciplines, which makes this article relevant to a much larger audience. “The reluctance of psychologists to assess the critical thinking (CT) of their students seems particularly ironic given that so many endorse CT as an outcome…” (p. 5) Their goal then is to offer “practical guidelines for collecting high-quality LOA (learning outcome assessment) data that can provide a scientific basis for improving CT instruction.” (p. 5) The guidelines are relevant to individual courses as well as collections of courses that comprise degree programs. Most are relevant to courses or programs in many disciplines; others are easily made so.

Understand critical thinking as a multidimensional construct—In their discussion of critical thinking in psychology, these authors propose that critical thinking includes skills, dispositions, and metacognition. Critical thinking skills in psychology include argument analysis and evaluation, methodological reasoning, statistical reasoning, causal reasoning, and skills for focusing and clarifying questions. Dispositions refer to “the willingness to engage in effortful thinking and the tendency to be open- and fair-minded in evaluating claims, yet remain skeptical of unsubstantiated claims.” (p. 6) Metacognition means being aware of one’s thinking and in control of it.

A recent article in The Teaching Professor highlighted the variation in definitions for critical thinking. These authors point out that critical thinking is either thought of generically or as being discipline-specific. They cite research that critical thinking is probably a combination of both. As a multidimensional construct, it contains some general reasoning skills and some skills that are specific to the discipline. The point is that if you want to assess learning outcomes associated with critical thinking, you cannot do that well without understanding how critical thinking is defined in your discipline.

Select important goals, objectives, and outcomes for assessment—What critical thinking skills and knowledge should students be able to demonstrate as a result of being in a course or program? Some faculty have learning goals so general that they are all but impossible to assess. They need further specification. If the assessment is to be scientific, then the goals, objectives, and outcomes must be translated into specific hypotheses—ones that can be tested.

Align assessment with instructional focus—“Measures for assessing the impact of instruction must be sensitive to the changes instruction is intended to produce.” (p. 7) If the measures are sensitive, then classroom assessment can be used to look at the techniques being used, compare their effectiveness with other techniques, and conclude which are better.

Take an authentic task-oriented approach to assessment—Taking an authentic task-oriented approach means using a performance to assess how well students are completing a task. In psychology, tasks requiring critical thinking include evaluating the quality of information from the Internet, analyzing and evaluating research literature, using psychological theory to analyze and evaluate behavior, and writing research and case reports, among others. Many of those tasks can be used to evaluate critical thinking in a variety of fields.

Use the best and most appropriate measures—Because critical thinking has multiple dimensions, multiple measures should be used to assess it. The authors point out that standardized tests of critical thinking (the Watson-Glaser Critical Thinking Appraisal and the Cornell Critical Thinking Test are the two examples referenced in this discussion) are “probably better measures of general CT skill.” (p. 9) In many cases, no standardized tests or measures assess the specific type of critical thinking or aspect of critical thinking being developed in a particular course. In situations like this, new instruments may need to be developed.

Conduct assessments that are sensitive to changes over time—“Simply testing seniors once in their capstone courses is not sufficient to infer changes over time because the levels of skill and knowledge of students entering the program are unknown.” (p. 9)

Assess frequently, embedding assessment and feedback into instruction—Students can be assessed too much, especially if the same instrument is being used. They become sensitized to those instruments. The authors recommend a formative approach that embeds assessment in instruction. In this case, the assessment provides the instructor useful feedback and helps students focus on their development of critical thinking. It offers them feedback that can be used to improve their critical thinking skills.

Interpret assessment results cautiously and apply the results appropriately—The quality of the data collected must be considered before decisions to change a course or a program are made. Not considering the quality of the data and not carefully interpreting the results can result in changes that do not improve learning outcomes.

Reference: Bensley, D. A. and Murtagh, M. P. (2012). Guidelines for a scientific approach to critical thinking assessment. Teaching of Psychology, 39 (1), 5-16.
Cell Phones in Class: A Student Survey

Cell phones in the classroom—it’s a topic that generates much consternation among faculty. Are policies that prohibit their use enforceable? Are students texting in class? If so, how many? If a student is texting, does that distract other students? Are students using their phones to cheat? Are there any ways cell phones can be used to promote learning? The questions are many and the answers still a long way from definitive.

Most faculty have opinions about how much cell phone use is occurring in their classrooms, but those individual answers need a larger context and independent verification. A recent survey of 269 college students representing 21 majors from 36 different courses, and equally distributed among first-year, sophomore, junior, and senior standing, offers this kind of benchmarking data. This student cohort answered 26 questions that inquired as to their use of cell phones as well as their observations regarding the cell phone use of their peers.

Virtually all the students (99 percent) reported that they had cell phones, and 97 percent said that they used their phones for text messaging. Another significant majority (95 percent) said they brought their phones to class every day, and 91 percent reported that they set their phones to vibrate. Only 9 percent said that they turned their phones off. As for their use of cell phones, 97 percent said they sent or received text messages while waiting for class to begin, and 92 percent admitted that they had sent or received a text message during class. Thirty percent reported that they send and receive messages every day in class. Virtually all these students (97 percent) indicated that they had seen texting being done by other students in the classroom.

However, these students do not feel that their instructors know that they are texting. Almost half of them “indicated that it is easy to text in class without the instructor being aware.” (p. 4) One survey question asked students to complete this statement: “If college instructors only knew _______ about text messaging in the classroom, they would be shocked.” The most common student response, offered by 54 percent of the students, was that teachers would be shocked if they knew how much texting was occurring in class. Obviously, class size influences the extent of texting or at least student perceptions of how easy it is to text without the teacher knowing.

Did students in this survey report that they were using their cell phones to cheat? Ten percent did indicate that they had sent or received a text message during an exam, with 9 percent saying it was easy to text during exams. Interestingly, 33 percent of students in the sample chose not to answer this question. The authors note, “Failure to answer could be seen as a reflection of the respondents’ desire to either not risk self-incrimination, or to not reveal to faculty that texting during an exam is a possibility.” (p. 4)

Students in this cohort didn’t feel that texting caused serious problems in the classroom. They did understand that the person texting is being distracted and maybe distracts a few students sitting nearby, but these students were reluctant to support a policy that forbids the use of cell phones. More than 64 percent believe students should be allowed to keep their cell phones on as long as they are placed on vibrate. Less than 1 percent said that cell phones should not be permitted in the classroom under any circumstances. About one-third reported that it was easier to text in a class if the professor had no policy against cell phones or appeared to be laid-back and relaxed about their use.

When asked about cell phone policies that work, students didn’t offer much in the way of concrete suggestions beyond being able to use them as long as they didn’t disturb others. Faculty policies described in the article include confiscating any phone that rings or phones that are being used for texting. Some professors answer phones that ring in class. If a student is observed texting, some professors count that student as absent for the day.

Given the pervasiveness of cell phones and the acceptability of their use almost anywhere these days, it’s difficult to imagine successfully enforcing almost any policy in the classroom and still having time left to teach. This article includes an appendix that contains the questions used in the survey. The use of cell phones and texting in your classes could be sensibly addressed by asking your students to respond to these questions. That way, you’d know for sure how much texting is happening and you’d have something concrete on the topic to discuss with students. The article also contains references to several studies documenting how texting interferes with and compromises learning.

Reference: Tindell, D. R. and Bohlander, R. W. (2012). The use and abuse of cell phones and text messaging in the classroom: A survey of college students. College Teaching, 60 (1), 1-9.
The Teaching Professor 2011 Index is now available online at: www.magnapubs.com/files/2011tpindex.pdf
Too Much Focus on Facts?

The content of many courses is too focused on the facts—those details that students memorize, use to answer test questions, and then promptly forget. That criticism has been levied against many introductory college-level courses, especially by those of us who think faculty are too focused on covering content. But is it a fair criticism? Do introductory courses ignore the higher-level thinking skills, like those identified on the Bloom taxonomy? Is the evidence empirical or anecdotal?

There isn’t much empirical evidence—that’s what a group of researchers discovered in their review of the literature. They decided to undertake an analysis of introductory biology courses to see whether or not evidence supportive of the criticism existed. Here are the three research questions they aimed to answer: 1) “What is the mean cognitive level faculty routinely target in introductory undergraduate biology, as evidenced on course syllabi and assessments?” 2) “Did faculty align their course goals and assessments to determine the degree to which students achieved the stated goals?” and 3) “What factors—class size, institution type, or articulating objectives on the course syllabus—predict the cognitive level of assessment items used on exams?” (p. 436)

They collected sample syllabi from 50 faculty who taught 77 different introductory biology courses, about half of which were general biology courses. They taught at a wide range of different public and private institutions. The teaching experience of the faculty cohort ranged from three to 36 years, and the size of the classes they taught ranged from 14 students to almost 500 students, with a mean class size of 192.

They looked at goals stated on the syllabi and categorized them using the Bloom taxonomy. They also analyzed what they called “high-stakes course assessments,” meaning quizzes and exams that accounted for 60–80 percent of the course grade. “These data provide evidence of what faculty consider important in the course. Goals stated in syllabi reflect faculty priorities about what they expect students to know and be able to

Continued on page 7
I’ve been collecting good articles onteaching and learning since the early’80s. In the process of looking for a par-ticular article, I regularly stumble ontoothers whose contents I remember whenI see them but have otherwise forgotten.I ran into just such an article recently.
It’s old, published in 1986, but it was thefirst article I remember reading wherethe content of the discipline was used toexplain certain instructional dynamics.Billson applies the principles of small
group dynamics as they are studied andunderstood in sociology to what happensin the classroom. And she does so for thisreason: “Deeper awareness of small groupprocesses can enhance the teachingeffectiveness of college faculty throughimproving their ability to raise studentparticipation levels, increase individualand group motivation, stimulate enthusi-asm, and facilitate communication in theclassroom.” (p. 143) So what principlesof small group dynamics might help usbetter understand what’s happening inour classrooms? Billson identifies and
discusses 15—four are highlighted here.Principle 1: Every participant in a
group is responsible for the outcome ofthe group interaction. Billson acknowl-edges that the major responsibility doesbelong to the professor, but she main-tains that students share a “significantresponsibility” as well. (p. 144) She rec-ommends discussing that responsibilitywith students and explores the possibili-ty of letting students plan certain seg-ments of the course.Principle 4: When people feel psy-
chologically safe in a group, their partic-ipation levels will increase. Students canbe made to feel safer when they areknown by names, when their firstattempts to contribute garner positivefeedback, and when the professor avoidssarcasm and ridicule.Principle 8: The leader of any group
serves as a model for that group. “Theway in which professors play their role,including how they present expectationsof students, carry out responsibilities, andhandle privileges implicit in the profes-
sorial role, has a profound effect on howstudents enact their role.” (p. 147)Principle 13: A group will set its own
norms of behavior and will expect con-formity to them. The same policies andprocedures can be used and yet classesrespond to them differently. Professorsneed to be aware of these norms and ifthey work against course goals, theyshould be discussed openly with stu-dents.Although “small group” isn’t a label
that feels like it fits classes with morethan 100 students, even large classesexhibit many features typical of groups.Applying these principles can result inclassroom climates where learning is amore likely outcome.
Reference: Billson, J. (1986). The collegeclassroom as a small group: Some impli-cations for teaching and learning.Teaching Sociology, 14 ( July), 143-151.
What Do Classes and Small Groups Have in Common?
The Teaching Professor March 2012
Online course offerings continue to grow. In 2006, experts (cited in the article referenced below) were estimating that some 2,000 major universities and colleges were offering online/Web-based courses, enrolling more than 5 million students. And that was 2006. As experience with online education grows, the opportunity for learning from that experience grows as well. Highlighted below are findings from a study that examined business students’ perceptions of college-level online courses.

Using a five-point Likert-type scale, this 800-student cohort indicated whether online courses were more or less difficult than regular classes, whether online courses provided poor or good learning experiences, and whether they were happy or unhappy that they had taken an online course, among other items. On a second portion of the questionnaire, they compared the amount of learning in traditional classrooms to the amount of learning in online courses, indicated whether they thought it was easier to cheat in online courses, and indicated whether they thought students who completed online coursework would have the same job opportunities as students who didn’t.

“Data analyses revealed that for the most part, the students did not hold polarized opinions regarding the online courses they had completed.” (p. 243) Mean responses for the first seven items on the questionnaire ranged from 3.05 to 3.51, “indicating relatively neutral overall attitudes toward the online course experiences.” (p. 243) The second part of the questionnaire identified some different perceptions between students who had and had not taken an online course. For example, students who hadn’t taken an online course thought it would be easier to cheat in online courses than did students who had taken one (a mean of 3.19 for those who had not taken an online course versus 2.75 for those who had).

Researchers were concerned about one finding. “What is rather disquieting is the fact that approximately one-third of the students who had completed at least one online course expressed negative attitudes toward or negative perceptions of online education.” (p. 246) They call for more research to understand the bases for these negative attitudes and perceptions.

Online courses are clearly part of higher education’s future. With the experience of offering them accumulating, it’s time to explore questions like these: Which courses should be offered online? What’s an appropriate balance between online and in-class courses, or does it matter? Who benefits most and least from taking online courses? Should some students (maybe beginning students in various at-risk categories) be advised against taking online courses? Are all faculty “good” online teachers?

Reference: Bristow, D., Shepherd, C. D., Humphreys, M., and Ziebell, M. (2011). To be or not to be: That isn’t the question! An empirical look at online versus traditional brick-and-mortar courses at the university level. Marketing Education Review, 21 (3), 241-250.
Online or In Class?
do; assessments reflect how faculty evaluate students’ achievement of those learning goals.” (p. 436)

The findings are breathtaking—at least they took away this editor’s breath. “Of the 9,713 assessment items submitted to this study by 50 faculty teaching introductory biology, 93% were rated Bloom’s level 1 or 2—knowledge and comprehension. Of the remaining items, 6.7% rated level 3 with less than 1% rated level 4 or above.” (p. 437) And the news about course goals wasn’t much better. Of the 250 goals pulled from course syllabi, 69 percent were at levels 1 and 2 of the Bloom taxonomy. The level of assessments was not affected by class size or by institutional type. Students’ knowledge and understanding of facts were what was being assessed in virtually all these courses.

Some may be tempted to argue that students must begin to understand a discipline by acquiring these basic facts—that it is knowledge of these facts that enables students to do higher-level thinking tasks. “Evidence to support such claims ... is lacking.” (p. 439) These researchers argue that high-level thinking skills must be developed right along with a knowledge base, and they contend that those kinds of thinking skills develop only when there is opportunity to practice them.

“We do not have a prescription for the ‘right’ cognitive level of goals and assessments in an introductory course.” (p. 439) However, their findings certainly indicate that in terms of fostering higher-order thinking skills, the current balance is not “right.” “We believe that students should begin practicing the skills of connecting, transferring, and modeling scientific concepts at the start, not the end, of their degree programs.”

This analysis focused on introductory biology courses. Every discipline offers introductory course work, and the norm is to pack those courses with content. Does that content focus too much on factual details? That’s a question every discipline ought to be exploring, and this study provides a great model of how that analysis can be undertaken.

Reference: Momsen, J. L., Long, T. L., Wyse, S. A., and Ebert-May, D. (2010). Just the facts? Introductory undergraduate biology courses focus on low-level cognitive skills. Cell Biology Education—Life Sciences Education, 9 (Winter), 435-440.
FOCUS ON FACTS FROM PAGE 6
“A disservice is done to any student cohort when they are globally defined by a single set of character traits. Within any generation, there is diversity and in the Millennial Generation, there is considerable diversity in background, personality and learning style.” (p. 223) So concludes a lengthy and detailed article that seeks, among other goals, to “demystify” the characteristics commonly attributed to students belonging to this generation. “Analysis of research data suggests that these students may not be as different from other generations in the fundamental process of learning as is regularly proposed.” (p. 215) These authors believe that’s important because “it is crucial to accurately assess which specific ‘stable characteristics’ truly impact the learning process and should be targeted for consideration in instructional design.” (p. 215)

They are critical of much of the evidence being used to support both positive and negative characteristics associated with Millennial learners. “Over the last decade, as the literature on the Millennial student has proliferated, it has proven that opinions beget opinions. A scrutiny of the references of a majority of publications and presentations indicates that the ideas being espoused are fundamentally opinions based on observation and perception as well as on student personal satisfaction and preference surveys rather than on evidence-based research methodologies.” (pp. 215-216)

They point out that many of the surveys documenting a set of Millennial student characteristics have been done at one or two institutions with populations not always representative of the larger student population. The Millennial cohort includes students from various races, religions, ethnicities, and socioeconomic backgrounds.

Among the Millennial student characteristics challenged by these authors is their need for the digital delivery of content. The authors cite multiple studies documenting “that a spectrum for both the desire and ability to use digital learning tools exists.” (p. 216) Based on their review of this literature, they conclude, “More careful evaluation of the purpose of technology in learning with regard to actual student needs, desires, and professional applications should be undertaken before additional time, money and resources are invested in more extensive technologies.” (p. 216)

Millennial students are thought to be multitaskers. They may be, but only a small percentage perform multiple tasks with no loss in efficiency. One study cited identifies a population of “supertaskers” who were able to multitask, but they made up only a bit more than 2 percent of the population studied. The other 97 percent were less efficient at one or both of the tasks they attempted to perform simultaneously.

Some characteristics associated with Millennial learners are verified by empirical research. Critical thinking skills are a good example. “Millennials have grown up with astonishing exposure to unvetted internet resources exemplified by Wikipedia and YouTube. The predilection for Millennial students is to make big gains quickly and with minimal effort, which has conditioned them to select the first or most easily available information source.” (p. 218) That has eroded their critical thinking skills.

More worrisome is the fact that students don’t appear to be developing high levels of thinking skills in college. These authors reference a 2006 survey of 400 employers nationwide. Only 24 percent of that group felt that college students had “excellent” preparation for the workplace; 65 percent said their preparation was adequate. Specifically on critical thinking and problem-solving skills, only 28 percent of the employers felt students had “excellent” preparation, and 63 percent said preparation in those skills was “adequate.”

The admonition to respond thoughtfully and critically to sweeping generalizations made about any generational cohort of students is appropriate. Generalizations about Millennial students can become stereotypes that reinforce erroneous assumptions about individuals and groups of them in courses. As these authors note, “Educators should encourage curricular change that will positively impact the learning process in a way that will be meaningful not just for a single generation but will have fundamental application for a broad spectrum of learners.” (p. 223)

Reference: DiLullo, C., McGee, P., and Kriebel, R. M. (2011). Demystifying the millennial student: A reassessment in measures of character and engagement in professional education. Anatomical Sciences Education, (July/August), 214-226.
Millennial Students: They Aren’t All the Same