
ENHANCING VETERINARY STUDENTS’ CLINICAL DECISION-MAKING SKILLS BY

PROMOTING REVISION OF THEIR DECISION-MAKING PROCESS IN CASE-BASED

LEARNING

by

HYOJIN PARK

(Under the Direction of Ikseon Choi)

ABSTRACT

Despite the importance of clinical decision-making, recent veterinary graduates felt that

their clinical decision-making skills were so minimal that they were unable to complete their

work independently. In order to enhance veterinary students’ clinical decision-making skills,

two instructional supports—a case-based online learning module and scaffolded revision

activities—were implemented based on a case-based learning model proposed by Choi and his

colleagues (2013) and findings from other research. To elaborate, the case-based online learning

module was utilized to enhance veterinary students’ knowledge application by providing realistic

context, and scaffolded revision activities were utilized to promote reflective thinking by

providing students with an opportunity to compare their opinions to those of experts and/or

peers.

Forty-seven of the 102 junior veterinary students enrolled in a small animal digestive disease course participated in this study. The participants were allowed to self-select among three scaffolded revision activity groups: expert commentary only, expert commentary with early peer feedback, and expert commentary with later peer


feedback. Quantitative data and qualitative data were collected from students’ initial and revised

clinical decisions, a transfer decision-making test, an online survey, and face-to-face interviews.

The quantitative results indicated that the scaffolded activity with expert commentary

was helpful in enhancing the quality of the participants’ revised clinical decisions. However, the

peer feedback and its timing did not influence the quality of the revised clinical decisions.

Furthermore, the results of the transfer test showed that there was no statistically significant

difference among the three groups.

The qualitative results from the online survey and face-to-face interviews suggested that students could benefit from the expert commentary in solidifying their clinical knowledge and in facilitating reflection on their decision-making process.

Furthermore, the participants who received peer feedback felt that it helped them retain

knowledge better by allowing them to communicate their thoughts with peers.

INDEX WORDS: Clinical decision-making, Case-based learning, Knowledge application,

Reflection, Experts, Peer feedback


ENHANCING VETERINARY STUDENTS’ CLINICAL DECISION-MAKING SKILLS BY

PROMOTING REVISION OF THEIR DECISION-MAKING PROCESS IN CASE-BASED

LEARNING

by

HYOJIN PARK

B.A., Ewha Womans University, Republic of Korea, 2008

M.A., Ewha Womans University, Republic of Korea, 2010

A Dissertation Submitted to the Graduate Faculty of The University of Georgia in Partial

Fulfillment of the Requirements for the Degree

DOCTOR OF PHILOSOPHY

ATHENS, GEORGIA

2016


© 2016

HYOJIN PARK

All Rights Reserved


ENHANCING VETERINARY STUDENTS’ CLINICAL DECISION-MAKING SKILLS BY

PROMOTING REVISION OF THEIR DECISION-MAKING PROCESS IN CASE-BASED

LEARNING

by

HYOJIN PARK

Major Professor: Ikseon Choi

Committee: Michael Orey

Chad Schmiedt

Electronic Version Approved:

Suzanne Barbour

Dean of the Graduate School

The University of Georgia

May 2016


DEDICATION

I dedicate this dissertation to my beloved family for supporting me with affection and love.


TABLE OF CONTENTS

Page

LIST OF TABLES ........................................................................................................................ vii

LIST OF FIGURES ..................................................................................................................... xiii

CHAPTER

1 INTRODUCTION .........................................................................................................1

Problem Statement ...................................................................................................2

Research Focus ........................................................................................................3

Research Questions ..................................................................................................5

2 LITERATURE REVIEW ..............................................................................................7

Clinical Decision-Making ........................................................................................7

Knowledge Application .........................................................................................16

Reflection ...............................................................................................................22

Conceptual Framework ..........................................................................................31

3 METHODS ..................................................................................................................35

Participants .............................................................................................................35

The Small Animal Digestive Diseases Course ......................................................37

Case-Based Online Learning Module ....................................................................37

Scaffolded Revision of the Initial Clinical Decision .............................................45

Procedures ..............................................................................................................47

Research Design.....................................................................................................49


Data Collection and Analysis.................................................................................52

4 RESULTS ....................................................................................................................63

RQ1. Revision effects of the scaffolded revision activities ...................................63

RQ2. Revision effects by groups ...........................................................................73

RQ3. Revision effects by groups across sessions ..................................................91

RQ4. Transfer effects ...........................................................................................109

RQ5. Student perception on the revision experiences .........................................115

5 CONCLUSION ..........................................................................................................129

Summary of the Findings .....................................................................................130

Effects of the case-based online learning module and scaffolded revision .........136

Implications of the Study .....................................................................................140

Suggestions for future research ............................................................................142

REFERENCES ............................................................................................................................144

APPENDICES

A PEER FEEDBACK GUIDELINES WITH REFLECTIVE PROMPTS ...................159

B ONLINE SURVEY ....................................................................................................163


LIST OF TABLES

Page

Table 2-1: Characteristics of Type 1 and Type 2 decision-making approaches ........................... 10

Table 3-1: Participants’ gender, averaged GPA, and the result of Levene’s Test ........................ 36

Table 3-2: The question prompts embedded at Decision Point 1 in the learning module ............ 43

Table 3-3: A time series design for this research .......................................................................... 52

Table 3-4: The research questions, data collection, data source, and data analysis techniques ... 53

Table 3-5: Three dimensions of the clinical decision-making skills and their rubric ................... 56

Table 3-6: Sample ideal script at Decision Point 2 “Taking Action for Doug” ........................... 58

Table 3-7: Inter-rater reliabilities over the Decision Points .......................................................... 59

Table 4-1: Data collection for the quality of student decision-making across groups in three

sessions ............................................................................................................................. 63

Table 4-2: Data used to test the revision effect (Research Question 1) ........................................ 65

Table 4-3: Descriptive statistics on the quality of the initial and revised case assessment .......... 66

Table 4-4: Summary of repeated-measures ANOVA for the overall quality of the initial and

revised clinical decisions .................................................................................................. 67

Table 4-5: Descriptive statistics on the quality of the initial and revised case assessment .......... 68

Table 4-6: Summary of repeated-measures ANOVA for the qualities of the initial and revised

case assessment ................................................................................................................. 69

Table 4-7: Descriptive statistics on the quality of the initial and revised prioritization of issues

and objectives.................................................................................................................... 70


Table 4-8: Summary of repeated-measures ANOVA for the qualities of the initial and revised

prioritization of issues and objectives ............................................................................... 71

Table 4-9: Descriptive statistics on the quality of the initial and revised plan of an immediate

action ................................................................................................................................. 72

Table 4-10: Summary of repeated-measures ANOVA for the quality of the initial and revised

plan of an immediate action .............................................................................................. 72

Table 4-11: Data used to test the two-way interaction effect of revision and group (Research

Question 2) ........................................................................................................................ 75

Table 4-12: Descriptive statistics on the overall quality of the initial and revised clinical

decisions among EC/NP, EC/EP, and EC/LP ................................................................... 76

Table 4-13: Summary of repeated-measures ANOVA for the overall quality of the initial and

revised clinical decisions in EC/NP, EC/EP, and EC/LP ................................................. 77

Table 4-14: Descriptive statistics on the quality of the initial and revised case assessment among

EC/NP, EC/EP, and EC/LP ............................................................................................... 78

Table 4-15: Descriptive statistics on the quality of the initial and revised prioritization among the

EC/NP, EC/EP, and EC/LP ............................................................................................... 79

Table 4-16: Descriptive statistics on the quality of the initial and revised plan among EC/NP,

EC/EP, and EC/LP ............................................................................................................ 80

Table 4-17: Descriptive statistics of the overall quality of the initial and revised clinical decisions

between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) ....... 82

Table 4-18: Summary of repeated-measures ANOVA on the overall quality of the initial and

revised clinical decisions in the peer feedback (EC/EP and EC/LP) and no peer feedback

groups (EC/NP) ................................................................................................................. 83


Table 4-19: Descriptive statistics on the quality of the initial and revised case assessment

between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) ....... 84

Table 4-20: Descriptive statistics on the quality of the initial and revised prioritization between

peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) ..................... 85

Table 4-21: Descriptive statistics of the quality of the initial and revised plan between peer

feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) ............................. 86

Table 4-22: Descriptive statistics on the overall quality of the initial and revised clinical

decisions between EC/EP and EC/LP ............................................................................... 87

Table 4-23: Summary of a repeated-measures ANOVA for the overall quality of the initial and

revised clinical decisions in the EC/EP and EC/LP groups .............................................. 88

Table 4-24: Descriptive statistics of the qualities of the initial and revised case assessment

between EC/EP and EC/LP ............................................................................................... 89

Table 4-25: Descriptive statistics on the quality of the initial and revised prioritization between

EC/EP and EC/LP ............................................................................................................. 90

Table 4-26: Descriptive statistics of the qualities of the initial and revised plan between expert

commentary with early peer feedback group (EC/EP) and expert commentary with later

peer feedback group (EC/LP) ........................................................................................... 91

Table 4-27: Data used to test the three-way interaction effect (Revision x group x session)

(Research Question 3) ....................................................................................................... 92

Table 4-28: Descriptive statistics of the overall quality of the initial and revised clinical decisions

among the EC/NP, EC/EP, and EC/LP across two sessions ............................................. 93

Table 4-29: Summary of a repeated-measures ANOVA for the overall quality of the initial and

revised clinical decisions among the EC/NP, EC/EP, and EC/LP .................................... 95


Table 4-30: Descriptive statistics of the overall quality of the initial and revised case assessment

among EC/NP, EC/EP, and EC/LP across two sessions ................................................... 96

Table 4-31: Descriptive statistics of the overall quality of the initial and revised prioritization of

issues and objectives among the EC/NP, EC/EP, and EC/LP across two sessions .......... 97

Table 4-32: Descriptive statistics of the overall quality of the initial and revised plan of an

immediate action among EC/NP, EC/EP, and EC/LP across two sessions ...................... 98

Table 4-33: Descriptive statistics of the overall qualities of the initial and revised clinical

decisions between peer feedback (EC/EP and EC/LP) and no peer feedback groups

(EC/NP) across two sessions ............................................................................................ 99

Table 4-34: Summary of repeated-measures ANOVA for the quality of the initial and revised

case assessment in the peer feedback and no peer feedback groups ............................... 100

Table 4-35: Descriptive statistics of the quality of the initial and revised case assessment between

peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across two

sessions ........................................................................................................................... 101

Table 4-36: Descriptive statistics on the quality of the initial and revised prioritization of issues

and objectives between peer feedback (EC/EP and EC/LP) and no peer feedback groups

(EC/NP) across two sessions .......................................................................................... 102

Table 4-37: Descriptive statistics on the quality of the initial and revised plan of an immediate

action between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP)

across two sessions ......................................................................................................... 103

Table 4-38: Descriptive statistics on the overall quality of the initial and revised clinical

decisions between EC/EP and EC/LP across two sessions ............................................. 104


Table 4-39: Summary of repeated-measures ANOVA for the quality of the initial and revised

case assessment in EC/EP and EC/LP ............................................................................ 106

Table 4-40: Descriptive statistics on the quality of the initial and revised case assessment

between early peer feedback group and later peer feedback group across two sessions 107

Table 4-41: Descriptive statistics of the quality of the initial and revised prioritization of issues

and objectives between early peer feedback group and later peer feedback group across

two sessions .................................................................................................................... 108

Table 4-42: Descriptive statistics on the quality of the initial and revised plan of an immediate

action between early peer feedback group and later peer feedback group across two

sessions ........................................................................................................................... 109

Table 4-43: Data used to test the transferred effects of the scaffolded revision activities

(Research Question 4) ..................................................................................................... 110

Table 4-44: Descriptive statistics on the scores on the transfer test among EC/NP, EC/EP, and

EC/LP .............................................................................................................................. 111

Table 4-45: Summary of One-way ANOVA for the scores on the transfer test EC/NP, EC/EP,

and EC/LP ....................................................................................................................... 112

Table 4-46: Descriptive statistics of the scores on the transfer test between the peer feedback

groups (EC/EP and EC/LP) and no peer feedback group (EC/NP) ................................ 113

Table 4-47: Summary of One-way ANOVA for the scores on the transfer test between groups

with peer feedback (EC/EP and EC/LP) and without peer feedback (EC/NP)............... 113

Table 4-48: Descriptive statistics of the scores on the transfer test between EC/EP and EC/LP 114

Table 4-49: Summary of One-way ANOVA for the scores on the transfer test between groups

with early peer feedback (EC/EP) and with later peer feedback (EC/LP) ...................... 115


Table 4-50: Means and standard deviations of the items about student perceptions on the

scaffolded-revision activity with expert commentary ..................................................... 117

Table 4-51: Means and standard deviations of the items about student perceptions on the

scaffolded revision with peer feedback ........................................................................... 119


LIST OF FIGURES

Page

Figure 2-1: Conceptual framework for this study to enhance veterinary students’ clinical

decision-making skills ..................................................................................................... 32

Figure 3-1: A sample screen of an initial decision-making activity in the case-based learning

environment ..................................................................................................................... 38

Figure 3-2: A sample screen of a revision activity in the case-based learning environment ........ 39

Figure 3-3: Five Decision Points in the module ........................................................................... 41

Figure 3-4: Scoring system for student decision-making responses ............................................. 56

Figure 3-5: A sample question from the final test ........................................................................ 60

Figure 4-1: The overall quality of initial and revised clinical decisions ....................................... 66

Figure 4-2: The quality of the three sub-dimensions of the initial and revised clinical decisions 73


CHAPTER 1

INTRODUCTION

Decision-making refers to the process of making a choice among a number of options, which requires the decision maker to handle data and algorithms to determine the best choice (Jonassen, 2010; Patton, 1978; Smith, Higgs, & Ellis, 2008; Thomas, Wearing, & Bennett, 1991). The level of certainty surrounding a decision can vary, but most real-world decisions are made under uncertainty (LeBoeuf & Shafir, 2005). Such uncertain decisions are ambiguous in that the decision maker must estimate the likelihoods of possible outcomes, which are not known (LeBoeuf & Shafir, 2005).

Decision-making plays a central role in everyday life as well as in many academic

disciplines, including medical fields (LeBoeuf & Shafir, 2005). Moreover, veterinary practice often involves making decisions under uncertain circumstances. To elaborate, clinical

decisions have vaguely defined or unclear goals and unstated constraints (Jonassen, 1997;

Mamede & Schmidt, 2004; Maudsley & Strivens, 2000; Orasanu & Connolly, 1993; Patton,

1978; Terry & Higgs, 1993). In addition, the decisions are often made in a world of incomplete

and imperfect resources (Jonassen, 1997; Orasanu & Connolly, 1993), while multiple players with different roles are involved in the act of decision-making (Higgs & Jones, 2008; Orasanu & Connolly, 1993). As veterinary practice has become increasingly complex, the task of decision-making has become more demanding for veterinarians.


Problem Statement

Despite the importance of decision-making, recent graduates felt that their clinical decision-making skills were so minimal that, as newly qualified surgeons, they were unable to complete their work independently (May, 2013; Vandeweerd et al., 2012b). Through a review of the literature, two major issues that veterinary students experience while developing decision-making skills were identified: a lack of knowledge application skills and a lack of reflective thinking skills.

In terms of the knowledge application issue, many veterinary students seem to have difficulty applying academic knowledge acquired in classroom settings to real-world problems (J. S. Brown, Collins, & Duguid, 1989; Gee, 1997). This issue seems to stem from the dichotomized curriculum and its goals (Maudsley & Strivens, 2000; Spiro,

Coulson, Feltovich, & Anderson, 1988). To elaborate, in introductory learning, students in

medical fields are exposed to a body of knowledge from various subject areas of biological

science and expected to establish their own knowledge structure (Maudsley & Strivens, 2000;

Spiro et al., 1988). This extensive amount of basic knowledge is often context-independent and

oversimplified due to the superficial similarities among related phenomena (J. S. Brown et al.,

1989; Spiro et al., 1988). At an advanced knowledge acquisition level, on the other hand, they

are expected to apply the basic knowledge from formal instruction to real settings (Maudsley &

Strivens, 2000; Spiro et al., 1988). The obstacles to applying such unsituated knowledge to real or realistic contexts are obvious.

In addition to knowledge application skills, reflective thinking skills are another important factor in enhancing the quality of clinical decisions (Mamede & Schmidt, 2005; Mamede, Schmidt, & Penaforte, 2008). Many studies have shown that experienced clinicians put more time and effort into reflection than inexperienced clinicians (e.g., Chi, Glaser, & Rees,


1982; Patel & Groen, 1991). Reflection refers to deliberately thinking about a past experience with the intent of improving aspects of that experience (Schon, 1988). Reflection is believed to facilitate deeper learning and lessen the gap between theory and practice by allowing learners to engage in critical thinking (Forneris, 2004) and in subjective or objective interpretation of their experiences (Schon, 1988). In particular, reflection plays a key role in students' successful clinical decision-making by supporting their planning, monitoring, and evaluation of courses of action (Bransford, Brown, & Cocking, 2000; Lin, Hmelo, & Kinzer, 1999; Shin et al., 2003) and by protecting against errors (Higgs & Jones, 2008).

Research Focus

To enhance students’ clinical decision-making skills, proper instructional supports should

be provided (Collins, Brown, & Holum, 1991; Collins, Brown, & Newman, 1987). In this study,

students were asked to perform two learning activities: making a series of clinical decisions

through the exploration of a realistic case (referred to as initial decision-making) and revising those decisions (referred to as revised decision-making). The initial decision-making activity, in particular, was designed to promote students' knowledge application, and the revision activity was designed to stimulate their reflective thinking. To better guide these

decision-making activities, two instructional supports—a case-based online learning module and

scaffolded revision activities—were proposed based on Choi’s case-based learning model (Choi,

Hong, Park, & Lee, 2013; Choi, Lee, & Kang, 2009; Choi & Lee, 2009; Choi, 2009) and

findings from other research on case-based learning, reflection, and decision-making.

A case-based online learning module was designed and developed to guide the initial

decision-making activity (Choi et al., 2013, 2009; Choi & Lee, 2009; Choi, 2009). Case-based

learning, which enables students to interpret, reflect on, and apply their direct or indirect


experiences to real or realistic contexts, is advocated as a promising teaching method for enhancing students' knowledge application skills as well as their reflective thinking skills (Ertmer & Russell, 1995). To elaborate, students are expected to lessen the gap between theory and practice by identifying real or realistic contexts, activating and elaborating on their context-

independent prior knowledge, and synthesizing as well as applying the knowledge to the real or

realistic contexts (Ertmer & Russell, 1995). During these activities in case-based learning,

students are also encouraged to reflect on their learning processes (Epstein, 1999; Mamede &

Schmidt, 2005).

The module employed for this study provided students with a series of realistic case

videos and critical thinking prompts (Choi et al., 2013, 2009; Choi & Lee, 2009; Choi, 2009).

Students were asked to watch the case videos, identify and analyze the problems, and make a

decision with the aid of critical thinking prompts. In order to create a realistic and educationally

valuable case, experienced clinical faculty developed a typical case of canine digestive disease

based on their past experiences. The case includes an entire cycle of clinical decision-making

activities, ranging from diagnosing a patient’s problem and announcing a treatment plan to

reacting to post-treatment scenarios. The critical thinking prompts comprised four questions that guided each student's decision-making process through four phases: identifying key information, assessing the case, prioritizing issues and objectives, and making an immediate plan.
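As a rough, hypothetical sketch (not taken from the actual module), the four prompt phases could be represented in an online learning module roughly as follows; the field names and question wording are invented for illustration.

```python
# Hypothetical representation of the four decision-making prompt phases
# shown at each Decision Point; wording and field names are illustrative only.
DECISION_PROMPTS = [
    {"phase": "identify_key_information",
     "prompt": "Which pieces of information in this case are most important, and why?"},
    {"phase": "assess_the_case",
     "prompt": "What is your assessment of the patient's condition?"},
    {"phase": "prioritize_issues_and_objectives",
     "prompt": "Which issues and objectives would you address first, and why?"},
    {"phase": "plan_immediate_action",
     "prompt": "What immediate action would you take next?"},
]

def prompts_for(decision_point: str) -> list:
    """Return the phase prompts shown to a student at one Decision Point."""
    return [f"[{decision_point}] {item['prompt']}" for item in DECISION_PROMPTS]

print("\n".join(prompts_for("Decision Point 1")))
```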

Then, to support the revision of the initial decision, two scaffolding strategies were proposed and developed (J. S. Brown et al., 1989; Williams, 1992). It is believed that one reliable method of promoting individuals' reflection is providing an opportunity for them

to compare their opinions to those of others (J. S. Brown et al., 1989; Williams, 1992). Based on


this finding, expert commentary videos and peer feedback were proposed to encourage students

to compare their clinical decisions to those of others. In the expert commentary videos, a team of experienced veterinary educators shared their own approaches to the case presented in the videos. For peer feedback, students had an opportunity to compare their initial decisions with those of their peers

and provide feedback to each other.

The expert commentary videos and peer feedback were provided between making the initial decision and revising it. The expert commentary videos were embedded in the case-based

learning module and provided to all students, while peer feedback was provided in two optional,

separate sessions. Thus, all participants were allowed to participate in one of the three

scaffolded revision activities: expert commentary with no peer feedback (EC/NP), expert

commentary with early peer feedback (EC/EP), or expert commentary with later peer feedback

(EC/LP).
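The study's quantitative comparisons (see the repeated-measures ANOVA tables in Chapter 4) contrast initial and revised decision scores within students and across these three groups. A minimal sketch of how such data might be organized and tested is shown below; the column names, example scores, and use of statsmodels' AnovaRM are illustrative assumptions, not the study's actual analysis code.

```python
# Illustrative only: organizing initial vs. revised decision-quality scores and
# testing the within-subject revision effect. Scores and labels are invented.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

scores = pd.DataFrame({
    "student":  [1, 1, 2, 2, 3, 3, 4, 4],
    "group":    ["EC/NP", "EC/NP", "EC/EP", "EC/EP",
                 "EC/LP", "EC/LP", "EC/NP", "EC/NP"],
    "decision": ["initial", "revised"] * 4,
    "score":    [2.0, 3.5, 1.5, 3.0, 2.5, 3.0, 2.0, 2.5],
})

# Repeated-measures effect of revision (initial vs. revised) on decision quality,
# collapsing across groups; group comparisons would be tested separately.
print(AnovaRM(scores, depvar="score", subject="student",
              within=["decision"]).fit())
```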

Research Questions

The purpose of this study was to examine the effects of the scaffolded revision activities

on the quality of the students’ revised clinical decisions. To elaborate, the gain effects and the

near-transferred effects of the scaffolded revision activities were investigated. Students’

perceptions on the scaffolded revision activities were also explored. The research questions that

addressed in the current study are as followed:

Research Question 1. Do the scaffolded revision activities enhance the quality of students’

revised clinical decisions in case-based learning?

Research Question 2. Do the scaffolded revision activities enhance the quality of the

different groups’ revised clinical decision in case-based learning?


Research Question 3. Are there significant differences in the quality of the initial and

revised clinical decisions among the groups across the two peer feedback sessions?

Research Question 4. Does participation in a scaffolded revision activity affect the transfer of students' clinical decision-making skills?

Research Question 5. What are the students' perceptions of the revision activities with

the case-based online learning module?


CHAPTER 2

LITERATURE REVIEW

This chapter provides a literature review of the relevant theoretical, conceptual, and

practical research used to inform this study. In particular, the first section provides an overview

of available decision-making theories. Next, the second and third sections discuss the two

typical difficulties students have in making a good clinical decision—knowledge application and

reflective thinking. Lastly, the fourth section presents a conceptual framework to enhance

veterinary students’ clinical decision-making skills through promotion of knowledge application

and reflection.

Clinical Decision-Making

Decision-making refers to a process of making a choice among a number of options, which requires a decision maker to handle data and algorithms to determine the best choice (Jonassen, 2010; Patton, 1978; Smith et al., 2008; Thomas et al., 1991). Decision makers must identify the most viable option among many for solving the problem under the circumstances in which the problem occurs (Jonassen & Hung, 2008). Clinical decision-making can also be defined as a systematic approach to choosing the best course of action among alternatives in clinical settings (Banning, 2007; Patton, 1978; Thompson & Dowding, 2002).

Clinical practice comprises a series of decision-making performances (Cockcroft, 2007;

McKenzie, 2014; Patel, Arocha, & Zhang, 2005). Take, for example, a doctor who needs to

diagnose a patient’s symptoms and plan the treatment. The doctor needs to decide which

questions to ask during the history taking and which systems to focus on during the physical


examination. Based on the data, the doctor determines which diagnostic tests to consider and

then which treatment plans to pursue. Lastly, the doctor is tasked with evaluating the outcome

and adjusting the diagnosis and/or treatment based on the patient's response to therapy or any

new information that has become available (Anene, 2013; Eddy, 1990).

Characteristics of Clinical Decision-Making

Clinical decision-making begins with a problem or a state of discrepancy that needs a solution (Jenkins, 1985). The problem is often an uncertain, complex, difficult, and ill-defined task with the following four characteristics (Mamede & Schmidt, 2004; Maudsley & Strivens, 2000).

First, clinical decisions have vaguely defined or unclear goals and unstated constraints

(Jonassen, 1997; Orasanu & Connolly, 1993; Patton, 1978; Terry & Higgs, 1993). A clinician

may be driven by multiple purposes, and some of the purposes may be vague or conflict with

others (Orasanu & Connolly, 1993). For example, although a clinician would like to save a patient using every possible treatment, a given treatment may pose risks for a particular patient, or the choice of treatments may be restricted by the owner's financial concerns.

Sometimes, organizational goals and norms may affect the decision (Higgs & Jones, 2008).

These conflicts are especially tricky in naturalistic decision-making settings, because the

dynamic situation may bring new values (Orasanu & Connolly, 1993). Since the situation may

continuously change, the results of a decision can be ambiguous or sometimes risky (LeBoeuf &

Shafir, 2005).

Second, multiple players with different roles are involved in the act of decision-making

(Higgs & Jones, 2008; Orasanu & Connolly, 1993; Smith et al., 2008; Terry & Higgs, 1993).

Many clinical decisions involve multiple parties, including a primary clinician, multiple specialists,

owners, agents of owners, and hospital administration (Orasanu & Connolly, 1993; Vandeweerd


et al., 2012a). The involvement of multiple players makes the decision-making process more

complex, because it is hard to have all of the players share the same understanding of goals,

situational status, and decisions (Orasanu & Connolly, 1993).

Third, there is no absolute solution in ill-defined settings, and sometimes multiple or no

solutions exist (Jonassen, 1997; Kitchener, 1983). In clinical decision-making, no single

absolute treatment plan exists, and an appropriate treatment plan for one patient may not work for another. In addition, although a clinician determines his/her solution, the multiple players described previously may not agree with it, so no consensus may be reached.

Fourth, clinical decisions, in most cases, should be made in a world of incomplete and

imperfect resources (Jonassen, 1997; Orasanu & Connolly, 1993). Available resources may be ambiguous, or their validity may be suspect (Orasanu & Connolly, 1993). When making a clinical decision, a clinician often consults colleagues as a primary resource, because doing so can be one of the quickest ways to obtain the information necessary for his/her decision-making (Vandeweerd et al., 2012b). However, colleagues may sometimes provide conflicting information. Also, patients or clients sometimes may incorrectly describe their

experiences, or diagnostic tests may leave open a range of possible diseases.

Decision-Making Theories

Dual process theory. The dual process theory, the most recent theory explaining the process of decision-making, posits that humans frequently use two systems to process information and make a decision. The two systems are called type 1 (system 1) and type 2 (system 2), respectively (Chaiken & Trope, 1999; Croskerry, 2009; Evans, 2003; Evans & Over, 1996; Hammond, 1996; Kahneman & Frederick, 2005). The two systems work through different


mechanisms: Type 1 users analyze information and make a decision on the basis of heuristics,

whereas type 2 users complete tasks using learned decision-making techniques (McKenzie,

2014). Table 2-1 lists the characteristics of type 1 and type 2 decision-making approaches.

Table 2-1

Characteristics of Type 1 and Type 2 decision-making approaches

Characteristics          Type 1 (or System 1)       Type 2 (or System 2)
In medical areas         Pattern recognition        Hypothetico-deductive reasoning
Type of reasoning        Experiential-inductive     Hypothetico-deductive
Cognitive style          Heuristics                 Systematic
Cognitive awareness      Unconscious                Deliberate, purposeful
Conscious control        Low                        High
Automaticity             High                       Low
Rate                     Fast                       Slow
Errors                   Normative distribution     Few but significant
Effort                   Low                        High
Emotional valence        High                       Low
Influence of context     High                       Low

Adapted from Croskerry, P., & Norman, G. (2008). Overconfidence in clinical decision making.

American Journal of Medicine, 121(5 SUPPL.) and Croskerry, P. (2009). A universal model of

diagnostic reasoning. Academic Medicine: Journal of the Association of American Medical

Colleges, 84(8), 1022–1028.

Type 1 decision-making approach: Heuristic, intuitive, and data-driven. Decision-making on the basis of the type 1 approach is characterized as heuristic, intuitive, and data-driven (Croskerry & Norman, 2008; Croskerry, 2009; Hardin, 2003b). The type 1 approach assumes that humans accumulate their knowledge and experiences in the form of patterns and store them in long-term memory (Banning, 2007; Croskerry, 2009; Hardin, 2003b; S. May, 2013). This patterned knowledge and experience guides the user's decision-making process in the type 1 approach: the synthesized past experiences serve as references for figuring out whether an encountered situation is typical to the user. If the situation is recognized as typical, the user


proceeds with the type 1 decision-making approach. In medical areas, the type 1 approach is referred to as pattern recognition (S. May, 2013), forward reasoning (Hardin, 2003b), or the intuitive-humanist model (Benner, 1982, 1984; Young, 1987).

The type 1 approach works quickly (McKenzie, 2014), because it operates on the basis of intuition. Intuition refers to an immediate understanding of something without a rationale (Benner

& Tanner, 1987; Schrader & Fischer, 1987). Type 1 users, thus, do not require deliberate efforts

to make a decision (McKenzie, 2014). They intuitively recognize the typicality of the case using

their prior knowledge or experiences and then make an appropriate decision (Evans, 2006;

Klaczynski & Lavallee, 2005; Stanovich, 1999). Because this decision-making approach asks

users to utilize their past experiences, this approach is a method that successful experienced

individuals frequently use (Banning, 2007; Hardin, 2003b).

However, the type 1 process may lead to many errors (McKenzie, 2014). Because the

type 1 process works on the basis of an individual’s past experiences, the quality of analysis for

decision-making depends on the user’s experiences (Banning, 2007; King & MacLeod, 2002).

For example, if the individual has not accumulated past experiences, type 1 reasoning may be

restricted. If the individual incorrectly refers to the past experiences, a decision may lead to

unsatisfactory results. Also, because type 1 decisions depend on the user’s intuition, the

accuracy of type 1 decisions may be affected by the users’ overconfidence or other emotional

influences (Banning, 2007; Croskerry & Norman, 2008; Croskerry, 2009; McKenzie, 2014;

Smith et al., 2008).

Type 2 decision-making: Systematic and analytic. Type 2 decision-making is systematic

or analytic (Croskerry, 2009). Whereas the type 1 process works on the basis of intuition (McKenzie, 2014), in the type 2 process decision makers' thinking follows


rational logic based on their knowledge (Banning, 2007; Cockcroft, 2007; Hardin, 2003b). In

medical areas, the type 2 process is referred to as hypothetico-deductive reasoning (Harasym,

1997; Hardin, 2003a), backward reasoning (Hardin, 2003; McKenzie, 2014), or information-

processing model (Joseph & Patel, 1990).

Decision-making on the basis of the type 2 approach involves several stages: cue

recognition and interpretation, hypothesis generation, hypothesis evaluation, and selection of the

leading hypothesis (Barrows & Tamblyn, 1980; Cockcroft, 2007). All individuals, regardless of

their experiences, can use this type of decision-making, because this process can be applied to all

types of problems (Cockcroft, 2007; Hardin, 2003b). Experienced individuals, in particular, tend

to use type 1 processes when a case is familiar and use type 2 processes when a case is not

typical.
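A simplified sketch of this dual-process dispatch is given below; the pattern store, cue table, and clinical labels are hypothetical and serve only to illustrate the type 1 / type 2 distinction.

```python
# Hypothetical illustration of dual-process dispatch; patterns and cues are invented.
KNOWN_PATTERNS = {
    ("abdominal pain", "anorexia", "vomiting"): "suspected foreign body obstruction",
}

HYPOTHESIS_CUES = {
    "dietary indiscretion": {"vomiting", "diarrhea"},
    "foreign body obstruction": {"vomiting", "abdominal pain", "anorexia"},
    "pancreatitis": {"vomiting", "abdominal pain", "lethargy"},
}

def type1_pattern_recognition(findings):
    """Fast, intuitive route: match the presentation against stored patterns."""
    return KNOWN_PATTERNS.get(tuple(sorted(findings)))

def type2_hypothetico_deductive(findings):
    """Slow, analytic route: interpret cues, evaluate hypotheses, pick the leader."""
    findings = set(findings)
    scores = {h: len(cues & findings) for h, cues in HYPOTHESIS_CUES.items()}
    return max(scores, key=scores.get)

def make_decision(findings):
    """Use type 1 when the case is recognized as typical; otherwise fall back to type 2."""
    recognized = type1_pattern_recognition(findings)
    return recognized if recognized is not None else type2_hypothetico_deductive(findings)

print(make_decision(["vomiting", "abdominal pain", "anorexia"]))  # type 1 path
print(make_decision(["vomiting", "lethargy"]))                    # type 2 path
```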

The type 2 decision-making process leads to fewer errors than the type 1 process (McKenzie, 2014). Because the process requires a user's systematic and logical thinking, the result is typically free from overconfidence or other emotional influences (McKenzie, 2014).

The type 2 process, however, is slow and requires more effort than the type 1 process (Hardin, 2003b; McKenzie, 2014). Type 2 reasoning is limited by working memory capacity, and if a problem requires a great deal of information to be processed, working memory can be overloaded (Evans, 2003; Hardin, 2003b; Slovic, Finucane, Peters, & MacGregor, 2004). Also, the type 2 process depends on the information available to construct and guide formalized decision-making (McKenzie, 2014). Thus, the type 2 process will not produce quality decisions if the quantity or quality of that information, or both, is limited (Banning, 2007; Harbison, 1991).

As mentioned above, the dual process theory assumes that humans use both type 1 and

type 2 processes to make a decision. For example, in clinical settings, experienced doctors make


intuitive decisions on the basis of their patterned knowledge. The knowledge has been

scientifically and analytically derived from their extensive past experiences. The decisions are

evaluated and justified by applying their scientific knowledge (Hammond, 1996).

Naturalistic decision-making theory. Naturalistic decision-making theory evolved in 1989 out of reflection on previous decision-making studies, which did not fully describe

decision-making processes in the real world (Collyer & Malecki, 1998; Klein & Klinger, 1991;

Klein, 1993, 2008; Lipshitz, Klein, Orasanu, & Salas, 2001; Lipshitz, 1993; Orasanu &

Connolly, 1993). Previous decision-making theories, commonly referred to as classical decision-

making theories, identified optimal ways of making decisions, and researchers suggested that

people should make a decision using the optimal ways (Collyer & Malecki, 1998; Klein &

Klinger, 1991; Klein, 1993, 2008; Lipshitz et al., 2001; Lipshitz, 1993; Orasanu & Connolly,

1993).

The optimal ways of decision-making begin with hypotheses. Decision makers derive

multiple hypotheses from statistical probabilities (Klein, 2007). Decision makers are encouraged

to generate multiple options, identify criteria for evaluating them, rate each option on multiple

criteria using analytical methods, and seek the best option (Klein & Klinger, 1991; Orasanu & Connolly, 1993).
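As a toy illustration of this classical, analytical procedure, the sketch below rates each option on weighted criteria and selects the highest-scoring one; the options, criteria, and weights are invented for illustration.

```python
# Toy example of the classical multi-criteria approach; all values are invented.
CRITERIA_WEIGHTS = {"efficacy": 0.5, "safety": 0.3, "cost": 0.2}

# Each option is rated 0-10 on every criterion (higher is better).
OPTIONS = {
    "surgery":            {"efficacy": 9, "safety": 4, "cost": 3},
    "medical management": {"efficacy": 6, "safety": 8, "cost": 7},
    "watchful waiting":   {"efficacy": 3, "safety": 9, "cost": 10},
}

def weighted_score(ratings):
    """Combine per-criterion ratings into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

best = max(OPTIONS, key=lambda name: weighted_score(OPTIONS[name]))
print(best, round(weighted_score(OPTIONS[best]), 2))  # highest-scoring option
```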

However, these optimal ways of making decisions may not be feasible in many real situations (Klein & Klinger, 1991; Klein, 1993, 2008; Lipshitz et al., 2001; Lipshitz, 1993; Orasanu & Connolly, 1993). The researchers did not account for the expertise of the decision maker (Orasanu & Connolly, 1993), the task in which the decision-making is embedded (Orasanu & Connolly, 1993), and other situational factors, such as time pressure

(Klein & Klinger, 1991). For example, experienced decision makers have a single leading


hypothesis in mind, rather than multiple hypotheses, and evaluate whether that hypothesis is correct. Also, people often encounter situations that require a decision under time pressure, but exploring multiple options and applying evaluation criteria to each option takes too

long. Thus, the previous decision-making models are most useful in well-structured settings in

which involved problem variables are controllable (Klein, 2008). Naturalistic decision-making

researchers argue that training tools or methods based on the formal standards may not

effectively elevate the quality of decisions made or be adopted in real field settings (Klein, 2008;

Yates, Veinott, & Patalano, 2003).

Naturalistic decision-making, derived from the reflection on the previous decision-

making studies, is an attempt to understand how people actually make decisions in complex real-

world settings (Klein & Klinger, 1991; Klein, 1993, 2008; Lipshitz et al., 2001; Lipshitz, 1993;

Orasanu & Connolly, 1993). The naturalistic decision-making community believes that, although classical decision-making theory considers decision-making to be choosing among known alternatives, real-world decision-making can best be investigated with a naturalistic approach (Patel et al., 2005). According to this community, decision makers in the real world rely on heuristics, as opposed to algorithmic strategies, to make decisions under complex conditions,

characterized by limited time, uncertainty, high stakes, vague goals, and unstable conditions

(Klein, 2008).

SOAP (Subjective and Objective observation, Assessment, and Plan). To support

veterinary students’ clinical decision-making process, SOAP, a thinking model as well as a

clinical tool, has been used for many years in a number of colleges of veterinary medicine (S.

May, 2013; Riegger, 2011). SOAP stands for subjective observation (S), objective observation (O), assessment (A), and plan (P) (Riegger, 2011). This format closely mirrors the


process of clinical decision-making, including collecting clinical data and framing data-driven

forward reasoning.

The first two parts represent the phase of data collection (Cameron & Turtle-song, 2002).

Subjective observation asks students to collect information from the perspective of a patient or

owners through history taking (Cameron & Turtle-song, 2002). For example, the patient’s

feelings (e.g., responsive, depressed, or lethargic) and the owner’s concerns and thoughts can be

collected (Cameron & Turtle-song, 2002; Riegger, 2011).

Objective observation involves collecting factual, measurable information, stated matter-of-factly and without opinions (Cameron & Turtle-song, 2002; Riegger, 2011). There are two types of objective observation: the veterinarian's observations and outside written materials

(Cameron & Turtle-song, 2002). Veterinarian observations include any physical findings that

the veterinarian witnesses. The findings should be precisely and descriptively stated. Outside

written materials include reports obtained from other veterinarians, physical examination results,

or medical records (Cameron & Turtle-song, 2002; Riegger, 2011).

Through the subjective and objective observations, veterinarians then make an assessment. During

this assessment phase, a veterinarian is asked to analyze and synthesize the data acquired from

the subjective and objective observations (Cameron & Turtle-song, 2002) and list differential

diagnoses and prognosis (Riegger, 2011). The goal of assessment is to identify potential

problems, which will lead to a diagnosis (Riegger, 2011).

Plan, the last stage of SOAP, asks veterinarians to write a plan of therapeutic and

behavioral actions to solve the patient's problem(s) based on the subjective and objective data and the assessment (Riegger, 2011). The plan includes (a) any additional diagnostic plans to further


define the problem, (b) treatment plans to address the problem, and (c) plans for communicating

with owners (Riegger, 2011).
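A minimal sketch of a SOAP-structured record is shown below; the field layout follows the S/O/A/P headings just described, while the field contents and types are invented for illustration.

```python
# Hypothetical SOAP record; contents are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class SOAPNote:
    subjective: str        # history and owner-reported concerns
    objective: str         # measurable findings: exam, labs, outside records
    assessment: List[str]  # differential diagnoses / prognosis, ranked
    plan: List[str]        # diagnostic, treatment, and client-communication plans

note = SOAPNote(
    subjective="Owner reports two days of vomiting; dog is lethargic and off food.",
    objective="T 39.6 C, HR 120 bpm, painful cranial abdomen; radiographs pending.",
    assessment=["foreign body obstruction", "acute pancreatitis", "dietary indiscretion"],
    plan=["abdominal radiographs and bloodwork", "IV fluids and antiemetics",
          "discuss possible surgery and costs with the owner"],
)
print(note.assessment[0])  # leading differential
```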

Knowledge Application

In order to make a good clinical decision, a veterinarian must possess adequate

knowledge and appropriately apply the knowledge to a given context. Educators often place an

emphasis on the amount or depth of medical knowledge, but more important for clinical decision-making than the quantity and quality of knowledge is how that knowledge is used for decision-making in a real context (Cutrer, Sullivan, & Fleming, 2013). When teaching veterinary learners, instructors should ensure that the knowledge learners have acquired at school is easily accessible and useful for clinical decision-making in a real context (Cutrer et al., 2013). In this section, veterinary students' typical difficulties with knowledge application and teaching methods to enhance such application are discussed.

Veterinary Education and Knowledge Application

Historically, teacher-centered lecture methods were dominant in many higher education

courses, including veterinary education (Whitney, Herron, & Weeks, 1993). The explosion of

knowledge and the size of classes might be the primary driving forces for lectures, especially for

large classes (Fletcher, Hooper, & Schoenfeld-Tacher, 2015). Lectures are one of the most efficient and cost-effective teaching methods for delivering a large body of knowledge to many people

(Campanella & Lygo-Baker, 2014). However, one of the limitations of lectures is that they do

not cover knowledge at a deeper level. For this reason, research that has reported the lack of

knowledge application skills of veterinary students questions the value of knowledge transmitted

by lectures.


Case-based learning has been adopted in many veterinary colleges in order to enhance

students’ abilities to apply learned knowledge to clinical settings (Fletcher et al., 2015). Case-

based learning refers to a pedagogical method that enables students to interpret, reflect on, and

apply their own or someone else’s experiences while participating in real or authentic situations

(Ertmer & Russell, 1995; Kolodner, Owensby, & Guzdial, 2004).

Advocates of case-based learning have credited case-based learning for shifting the focus

of learning from memorization to application by bridging the gap between theory and practice

(Ertmer & Russell, 1995; Flynn & Klein, 2001; Hansen, Ferguson, Sipe, & Sorosky, 2005;

Thistlethwaite et al., 2012; Williams, 1992). Most knowledge learned at school is abstract and decontextualized (J. S. Brown et al., 1989). Unsituated knowledge, without a specific context in which students can connect knowledge and setting, often leaves students having difficulty utilizing their learned knowledge for any critical or deep purposes (J. S. Brown

et al., 1989; Gee, 1997). In other words, “what is learned” (p. 32) is separated from “how it is

learned and used” (p. 32) (J. S. Brown et al., 1989). To overcome the separation, it is important

for novices to be enculturated by experiencing real or similar communities and cultures (J. S.

Brown et al., 1989). Case-based learning encourages students to integrate their knowledge into

the context of real or realistic scenarios (Thistlethwaite et al., 2012).

Also, advocates of case-based learning have pointed to the power of stories to explain the effectiveness of case-based learning. Stories are the "means [by] which human beings give

meaning to their experience of temporality and personal actions” (Polkinghorne, 1988, p. 11).

Stories require less cognitive effort (Jonassen & Hernandez-Serrano, 2002; Williams, 1992),

because their structure resembles the structure of our lives, which makes it easier for students to

generate a mental model of the situation (Williams, 1992). Stories were initially advocated due


to their entertainment value (Jonassen & Hernandez-Serrano, 2002) and now have begun to play

a more important role in social science studies (Jonassen & Hernandez-Serrano, 2002). In

particular, stories are helpful in training novices, because they provide vicarious experiences to novices who lack direct experience (Jonassen & Hernandez-Serrano, 2002).

Case-Based Learning and Knowledge Application in Veterinary Education

Many researchers and educators have reported the effects of case-based learning in

veterinary education. Case-based learning in veterinary education is effective in facilitating

knowledge integration, development of expertise, clinical reasoning, problem solving, and

decision-making.

First, case-based learning facilitates verification, application, and integration of

understanding of core concepts to real-life situations (Hansen et al., 2005; Thistlethwaite et al.,

2012). For example, Sharkey, Overmann, and Flash (2007) provided second-year students with

case-based writing assignments, including a clinical history, physical examination findings, and

laboratory data. The results indicated that the students reported increased confidence in

their understanding of the content and their ability to apply the knowledge. Students in a study by Malher

et al. (2009) also reported increased confidence in assimilating and integrating the concepts they

previously learned. Grauer, Forrester, Shuman, and Sanderson (2008) compared traditional lecture-based learning and case-based learning for third-year veterinary students. They tested three types of knowledge: factual knowledge, knowledge with application, and application and analysis. The results showed that students in the case-based learning group outperformed their peers in the traditional lecture group on application and analysis. Monaghan and Yew (2002)

observed that third-year veterinary students who participated in a case-based parasitology


laboratory experienced enhanced understanding of the clinical concepts and valuable clinical

insights.

Second, case-based learning supports the development of expertise (Cannon-Bowers &

Bell, 1997). It is clear from research evidence that experts make clinical decisions based on their

expertise (e.g., Recognition-Primed Decision model, pattern recognition). The ideal situation to

develop this expertise would be to have students experience as many real practices as possible (Ladyshewsky & Jones, 2008), as experts have done through their experiences over time. However, real clinical settings are characterized by complexity, uncertainty, high risk, and time pressure, making it difficult to train students in real practice (Terry & Higgs, 1993). In other words, time pressure and personal and professional expectations for satisfactory results make it difficult for learners to take time out to engage in mindful practice through persistence, struggle, and self-reflection (Kassirer, 2010).

Third, case-based learning is effective in promoting students’ higher order thinking, such

as clinical reasoning, critical thinking, problem solving, and decision-making (Thistlethwaite et

al., 2012). Case-based learning requires learners to utilize their critical thinking skills to

approach a given case or problem situation. To elaborate, they need to analyze the situation, identify possible solutions, and suggest the best available solution (Flynn & Klein, 2001). In a study by Malher, Bareille, Noordhuizen, and Seegers (2009), for example, a

case-based learning approach was employed to help undergraduate veterinary students learn

about dairy herd health consultancy. The students were asked to identify problems and

recommend action plans for implementation. The participants responded positively to the case-based problem, reporting that it was effective in developing their problem-solving skills in the field of dairy herd health management. Patterson (2006) examined second-year veterinary


students’ confidence in performing clinical reasoning skills through practicing in small-group

case discussion. The researcher found significant increases in students’ self-confidence in

clinical reasoning skills.

In addition, case-based learning seems to be beneficial in increasing students' interest in the topic (Koh et al., 1995; Hansen et al., 2005) and competence in performing tasks (Patterson, 2006). These benefits seem to stem from the features of case-based learning, which place learners into situations that encourage them to actively participate in the process of problem solving,

including hypotheses generation, data collection, data interpretation, and solution generation

(Kolodner, Hmelo, & Narayanan, 1996).

Educational Implications

Previous research on case-based learning and/or clinical decision-making has suggested

several educational implications that could enhance students' clinical decision-making skills: providing

an authentic practice experience, providing an entire cycle of decision-making, providing expert

decision-making processes, and encouraging discussions.

Providing an authentic practice experience. One of the factors that prevent novices from applying knowledge or skills to real contexts is a lack of realistic activities, contexts, and cultures (J. S. Brown et al., 1989). In order to train novices to deal with complex, uncertain real-world problems, they should be exposed to stories, cases, and problems generated in real settings (J. S. Brown et al., 1989; Jonassen & Hernandez-Serrano, 2002).

Also, the effectiveness of case-based learning is dependent on the quality of the cases.

One of the important considerations for creating educationally valuable cases is whether the case

includes authentic problems with real-world challenges (Choi et al., 2013). By providing real or

realistic cases, students can be situated in contexts in which their learned knowledge can be used.


In other words, what they have learned can be closely connected with how the knowledge is used

(J. S. Brown et al., 1989).

Providing an entire decision-making cycle. Case-based learning for enhancing

decision-making skills should reflect the decision-making process in practices (Thistlethwaite et

al., 2012). Williams (1992) suggested including cases that help novices acquire a broad range of problem-solving skills. When provided with the entire clinical decision-making cycle of situation assessment, data interpretation, and treatment planning, novices are expected to learn how to evaluate a patient's problem or clinical data, perform a diagnosis, and provide treatment.

Providing expert decision-making processes. According to the dual process theory and

other research on clinical decision-making, most experienced decision makers use both

heuristic (i.e., type 1 in dual process theory) and analytical (i.e., type 2 in dual process theory)

cognitive processes to make a decision in real settings (Benner et al., 1992, 1996; Crow & Spicer,

1995; Easen & Wilcockson, 1996; Grobe et al., 1991; Lauri, 1992; Thiele et al., 1991). On top

of that, several researchers (e.g., Norman et al., 1994; Patel et al., 2005; Rikers, Loyens, &

Schmidt, 2004) have found that the heuristic approach, or data-driven approach, better supports

the development of clinical reasoning skills. Thus, to help frame novices' thinking processes, it is important to provide a decision-making model based on a data-driven approach, such as naturalistic decision-making models or the SOAP tool, rather than on a hypothesis-driven approach.

Encouraging discussions. Many theoretical and empirical studies have supported

benefits of discussions in case-based methods (Thurman, Volet, & Bolton, 2009). Case-based

learning does not always involve discussions with colleagues and experts, but collaborative

learning with them can be more effective and efficient than individual learning (Choo, Rotgans,

Yew, & Schmidt, 2011; Flynn & Klein, 2001; Sato, 2013; Thistlethwaite et al., 2012; Thurman


et al., 2009; Uribe, Klein, & Sullivan, 2003). In terms of efficiency, for example, learners can

cover more knowledge in a shorter amount of time than they do alone (Flynn & Klein, 2001). In

terms of effectiveness, discussions can encourage learners to consider learning content more

deeply than they do alone (Mann, Gordon, & MacLeod, 2009).

Discussions, especially with peers, allow learners to challenge others’ ideas more freely

than having discussions with experts (Mann et al., 2009). Schon (1987) emphasized the

importance of an open climate wherein people can freely exchange information, opinions and

feedback. For example, Droge and Spreng (1996) compared instructor-driven and learner-driven

discussions and found that learner-driven discussions were beneficial in terms of efficiency (e.g.,

use of time) and effectiveness (e.g., satisfaction, achievement of educational goals, and

competencies).

In addition to this, group learning can enhance understanding of related topics. Levin

(1995) compared the effectiveness of two discussions: one among experienced teachers and the

other among inexperienced teachers. The results indicated that both groups of teachers

benefitted from discussions. In particular, discussions were beneficial in experienced teachers’

reflection and inexperienced teachers’ enhanced understanding of the topic. Also, researchers in

veterinary or medical education (e.g., Volet, Summers, & Thurman, 2009) have observed that

students who participated in collaborative learning activities had a better understanding of

medical knowledge.

Reflection

Research on the characteristics of expert clinicians' reasoning indicates that experts

frequently monitor and manage their cognitive processes to make better decisions (Higgs &

Jones, 2008; Wojcikowski & Brownie, 2013). In other words, the research shows that experts


put more effort into reflection. For example, experienced physicians spent more time on the

verification of their diagnoses than less experienced physicians did (Patel & Groen, 1991). Also, experts kept revisiting and refining their initial representations throughout their practice (Chi,

Glaser, & Rees, 1982). What is reflection? What is the role of reflection in decision-making

actions?

Definitions of Reflection

Many researchers have defined reflection in terms of its characteristics. Some researchers have described reflection as deliberate thinking. Dewey defines reflection as

“active, persistent, and careful consideration of any belief or supposed form of knowledge in the

light of the grounds that support it and the further conclusions to which it tends” (p. 9). He adds

that a state of doubt or uncertainty, such as a complex situation, errors, or unexpected results (Epstein, 1999), encourages individuals to seek possible explanations or solutions to the unstable state, which in turn provokes reflective thinking. Mann et al. (2009) also agree that

awareness of a need can lead to reflection.

Other definitions of reflection have highlighted the role of reflection, especially in

uncertain situations, such as problem solving and/or decision-making. For example, Moon

(1999) defines reflection as “form of mental processing with a purpose and/or anticipated

outcome that is applied to relatively complex or unstructured ideas for which there is not an

obvious solution” (p. 23), and Van Manen (1991) defines reflection as “the connotation of

deliberation, making choices, coming to decisions about alternative courses of action” (p. 511).

Schon also regards reflection as an interaction that occurs between the problem solver and a

surrounding problematic situation.


Some definitions have emphasized that reflection can lead to an improved outcome. For

example, Boud et al. (1985) stated that reflection is “a generic term for those intellectual and

affective activities in which individuals engage to explore their experiences in order to lead to a

new understanding and appreciation” (p. 19).

To summarize, reflection refers to deliberate thinking about experience, such as one's own thinking, goals, or content. As a result of reflection, individuals can expect an improved quality of outcome.

Benefits of Reflection

Benefits of reflection in general learning. Historically, researchers and educators have

emphasized the importance of reflection, especially in learning (e.g., Bloom, 1956; Boud &

Walker, 1998; Brown, Bransford, Ferrara, & Campione, 1983). The importance of reflection for

the learner stems from two benefits that emerge during the learning process: engaging

individuals in deeper learning and lessening the gaps between theory and practice.

Firstly, reflection can enhance an individual’s abilities to associate and integrate

information, which in turn can lead to deeper learning (Mann et al., 2009). Reflection may not

directly elicit better understanding of knowledge, but it can help students focus more on the

structure of knowledge instead of its superficial features (Davis & Linn, 2000). In paying more

attention to the structure of knowledge, students are then able to engage in knowledge integration

processes like creating links between information, integrating the newly learned knowledge into

their prior knowledge, and restructuring as well as expanding their knowledge system (Davis &

Linn, 2000; Davis, 2003). Many empirical studies provide support for the hypothesis that

reflection can lead to better understanding of knowledge, such as more precise and sophisticated

knowledge (e.g., Land & Zembal-Saul, 2003) and better knowledge integration (e.g., Davis &


Linn, 2000; Davis, 2003) in learning. Specifically within the fields of veterinary and human

medicine, the act and practice of reflection is presumed to allow clinical knowledge to replace biomedical knowledge (Mamede & Schmidt, 2004).

Additionally, productive reflection is thought to lessen the gaps between theories and

practices (Mann et al., 2009). Given that theoretical knowledge is abstract and lacks specific

contexts, it could be difficult for novices to immediately make connections between theory and

practice (A. L. Brown, 1987). Reflection, here, helps students dissect the structure of the

knowledge, become aware of their own or someone else’s relevant experiences in learning

processes, and make relationships between the knowledge and the relevant experiences based on

what they know (Moon, 2004; van den Boom, Paas, & van Merriënboer, 2007).

Benefits of reflection in clinical decision-making. Research has shown that reflection

plays a key role in successful clinical decision-making in the fields of general medicine

(Epstein, 1999; Mamede & Schmidt, 2004, 2005), nursing (Murphy, 2004; Rashotte &

Carnevale, 2004), physical therapy (Atkinson & Nixon-Cave, 2011), and other health professions

(Mann et al., 2009). However, researchers are still investigating why reflection is so important in making clinical decisions.

Some researchers have found that reflection may be rigorously required in uncertain,

complex, and ill-defined areas (Schon, 1983), and clinical decision-making is one of the ill-

defined areas (Mamede & Schmidt, 2004; Maudsley & Strivens, 2000). Unsurprisingly, research

has also found that performing a task in ill-defined settings is challenging, because there is not a

single absolute solution (Jonassen, 1997). Even if there is an appropriate and proven solution, it

may not work for another similar problem. On top of that, available resources may be

ambiguous, or their validity can sometimes be suspect (Orasanu & Connolly, 1993). Thus, in these


ill-defined settings, reflective processes to plan, monitor, and evaluate the course of actions seem

to be more rigorously required (Bransford, Brown, & Cocking, 2000; Lin, Hmelo, & Kinzer,

1999; Shin et al., 2003). Additionally, during clinical decision-making processes, even experts

can protect themselves against errors if they reflect upon their decision-making process (Higgs &

Jones, 2008). A good decision maker may reflect on the underlying norms, strategies, and

theories implicit in the processes (Higgs & Jones, 2008).

Moreover, reflection can serve as a new learning experience (Schon, 1983, 1988).

Through reflection, a clinician critically reanalyzes the situation and can further identify

underlying concepts. The clinician refines his or her past experiences or constructs a new experience for future clinical decision-making. Conversely, novice decision makers who are lacking in

reflective thinking skills may not be able to learn as much as experienced decision makers could

(Ladyshewsky & Jones, 2008).

These findings indicate that it is important to encourage students to reflect upon their

thinking processes while making a clinical decision, which may result in an improved quality of

their decision (Croskerry & Nimmo, 2011; Epstein, 1999; Jones, 1992; Mamede & Schmidt,

2004, 2005). Recently, practitioners learned the importance of reflection in their actual practice

and have incorporated reflection into educational courses or programs ranging from initial

training to continuing education. As a result, the concept of reflection can be found in various

curriculums, including teaching (e.g. Zeichner & Liston, 1987), social work (e.g. Gould &

Taylor, 1996), general medicine (Epstein, 1999; Mamede & Schmidt, 2004, 2005), nursing

(Murphy, 2004; Rashotte & Carnevale, 2004), physical therapy (Atkinson & Nixon-Cave, 2011),

veterinary education (Khosa, Volet, & Bolton, 2014) and other health professions (Mann et al.,


2009) where students are required to closely integrate academic studies and field experiences

(Boud & Walker, 1998).

Scaffolded Reflection

Students often have difficulty engaging in reflective thinking for many reasons. One

such reason is that students rarely learn the importance and benefits of reflection (Boud &

Walker, 1998). On top of that, leaving time for reflection does not always guarantee productive

reflection (Boud & Walker, 1998), because students can fail to reflect mindfully on their

knowledge or a course of action (Woodward, 1998). Without appropriate guidance and

direction, students may spend the time set aside for reflection on something else entirely (van den Boom et al., 2007). Thus, scaffolding has been identified as a promising

method to guide students’ reflection in many studies (e.g., Davis & Linn, 2000; Kassirer, 2010;

Khosa et al., 2014; Land & Zembal-Saul, 2003; Quintana, Zhang, & Krajcik, 2005; Yang, 2011).

The term scaffolding originally refers to a physical support used for the construction of a

building, which is later removed when the building's construction is complete. Scaffolding in the

context of learning refers to an aid provided by experts or more knowledgeable people to assist

novices or students to complete or perform a task (Collins et al., 1991, 1987; Wood, Bruner, &

Ross, 1976).

The key concept of scaffolding is that an external support or aid is provided to enable a

learner to accomplish a certain task that would otherwise be out of reach (Wood et al., 1976).

This concept is associated with Vygotsky’s Zone of Proximal Development (ZPD), which

Vygotsky (1978) defines as the gap of competence. To elaborate, this gap lies between what a person can do alone and what the same person can do with an external aid, that is, with scaffolding. The gap of competence therefore refers to the difference in performance when a person completes a task with and without scaffolding. Thus, in

order to scaffold a student's performance, learning tasks should be situated in the ZPD (Dennen,

2004; Vygotsky, 1978). In the ZPD, students are able to participate in activities beyond their

own performance level, which enables students to utilize cultural tools to adapt themselves to the

specific activity at hand (Dennen, 2004).

Scaffolding a person’s performance assumes that there are shared understandings

between an agent who provides an aid and an aided person (Dennen, 2004). Although agents

and learners may have differing understandings of the task, they eventually share a common

understanding through collaborative interaction. This shared understanding is called

intersubjectivity. Without intersubjectivity, students may experience learning conflicts, low

participation levels, low engagement, or unexpected learning outcomes (Dennen, 2004).

Scaffolding also assumes that people actively construct their own knowledge (Dennen,

2004). Agents provide an external aid for students to facilitate their learning, but that does not

mean that agents are the center of cognitive activity. Learners’ participation has to be key to the

activity in pursuit of the learning goals (Dennen, 2004; Stone, 1993). It should be noted that in

due time, scaffolds must be removed so that learners can ultimately perform a task alone (Sherin

et al., 2004).

Educational Implications

In order to support students’ reflection in the process of decision-making, instructors

need to identify appropriate methods and tools to intentionally encourage students to make their

thinking visible (Boud & Walker, 1998; Linn, 2000). Based on studies on reflection, several

educational implications to facilitate students’ reflection during clinical decision-making were

identified.


Having students take time to reflect. Although reflection is important in learning,

students rarely realize the importance of reflection and thus, fail to engage in reflective thinking

at a deeper level (Boud & Walker, 1998). As indicated previously, reflection is a purposeful

action (Mann et al., 2009). Individuals should take time to reflect upon the actions they selected

and upon the situations they encountered. A five-factor structure of reflective practice found by

Mamede and Schmidt (2004) supports the importance of reflection in clinical practice: deliberate

induction, one of the five factors, refers to taking time to reflect upon an unfamiliar problem.

Likewise, it is important to prompt or remind novices to take time to reflect on their actions and the situations surrounding them.

Providing an opportunity to be exposed to expert performance. It may be important

for novices to receive opportunities to learn how experts use domain knowledge and strategies as

well as make a clinical decision when encountering authentic, uncertain, and ill-defined cases (J.

S. Brown et al., 1989; Williams, 1992). Experts may demonstrate their internal cognitive

process by physically carrying out a task or verbalizing their performance. Verbalization, in

particular, provides an opportunity for students to explicitly observe an expert’s internal

cognitive process as they engage in their decision-making process (Collins et al., 1987; Pedersen

& Liu, 2002). Thus, the expert’s demonstration enables students to build a task-related problem

space in a quicker and more accurate way (Collins et al., 1991, 1987; Williams, 1992), and,

subsequently, they are expected to internalize those behaviors and strategies demonstrated by

experts at the individual level (Collins et al., 1991, 1987).

Being exposed to experts’ decision-making process is expected to significantly benefit

students, since most novices or students approach problem solving by referring to known

examples or developing abstract declarative rules that guide their problem solving (Anderson,


1987; Pirolli & Anderson, 1985). Thus, expert commentary would work for them as an excellent

reference or model of behavior for their performance.

Providing an opportunity to give feedback to others. One promising method to promote learners' reflection upon their thinking, their actions or inactions, and the contexts of various situations is to encourage them to explore other people's ideas while engaging in discussions with peers and/or instructors (Boud & Walker, 1998; Ladyshewsky & Jones, 2008; Rogoff, 1990; van den Boom et al., 2007), which can help the students revisit and reassess their own actions and thinking (Land & Zembal-Saul, 2003; O'Malley & Scanlon, 1990).

Feedback among peers, in particular, is recommended as an effective means to promote

reflective processes for two reasons. Firstly, peer feedback facilitates cognitive growth

(Ladyshewsky & Jones, 2008; Rogoff, 1990) and influential interaction can occur between

partners of similar status (Rogoff, 1990). Peers who share joint problem-solving activities would

feel freer to examine the logic of arguments than peers who only interact with adults or experts

(Ladyshewsky & Jones, 2008; May & Newman, 1980; Rogoff, 1990; Terry & Higgs, 1993).

Furthermore, the ability to examine the logic of arguments leads to more effective problem

solving (Ladyshewsky & Jones, 2008; May & Newman, 1980; Rogoff, 1990; Terry & Higgs,

1993). The second reason is that peer feedback is more efficient in terms of costs as compared to

having supervised small-group discussions or bedside teaching (Borleffs, Custers, van Gijn, &

ten Cate, 2003; Ladyshewsky & Jones, 2008). From a practical point of view, time pressures and

workloads may be one of the factors that make it difficult for supervisors to explore students’

clinical reasoning in action (Ladyshewsky & Jones, 2008). When peers serve as cognitive

facilitators for one another, the teacher-to-student ratio no longer constrains the interaction activity.


Providing multiple practice experiences. Research suggests that reflective thinking can

be stimulated by complex problems (Epstein, 1999; Mamede & Schmidt, 2005). To elaborate,

reflective thinking skills can be further enhanced when individuals continuously revisit similar

complex problems, because one's reflective thinking ability can be framed and

promoted as expertise develops (Mamede & Schmidt, 2005; Mann et al., 2009). For example,

one study found that learners reported that their reflective thinking ability had improved after

participating in clinical practice experiences (Hallett, 1997). Another study found further

support for the importance of clinical practice experiences (Song, Grabowski, Koszalka, &

Harkness, 2006). According to this second study, college-level learners identified ill-structured

tasks as one of the important elements for their reflective thinking.

Conceptual Framework

Based on the implications from the literature review, several instructional designs to

develop students’ clinical decision-making skills were identified. These instructional designs

serve two purposes: to enhance students' knowledge application and to promote their reflection, as presented in Figure 2-1. Case-based teaching methods were employed to

enhance veterinary students’ knowledge application, and scaffolded revision activities were

employed to promote their reflection. Specific instructional designs employed for the case-based

teaching methods and scaffolded revision activities are briefly described in the following section.


Figure 2-1

Conceptual framework for this study to enhance veterinary students’ clinical decision-making

skills

Case-based online learning module. To enhance veterinary students’ knowledge

application, case-based teaching methods were employed. A case-based online learning

environment for veterinary students was developed by adapting from Choi’s case-based learning

model (Choi et al., 2013, 2009; Choi & Lee, 2009; Choi, 2009) and findings from other related

research that have been described in the previous sections. To elaborate, the current case-based

module was designed based on the four main educational implications: (1) providing an

authentic experience; (2) providing an entire decision-making cycle; (3) providing expert

decision-making processes; and (4) encouraging discussions.


In order to provide an authentic experience, the case-based online learning module

utilized realistic clinical cases. In particular, the clinical cases were equipped with real-world

challenges (Choi et al., 2013) based on the understanding of the educational objectives of a

course in which the module was implemented (Hansen et al., 2005). More specific descriptions

about each learning element with actual examples will be discussed in the next section, Methods.

In order to provide an entire decision-making cycle which could accurately reflect the

reality of a veterinarian’s practice, the case was organized according to critical decision points of

collecting data, announcing a diagnosis, performing surgery, prescribing medications, and

identifying a follow-up plan.

In order to provide expert decision-making processes, a decision-making model based on

the expert decision-making process was provided. The suggested decision-making model

consisted of four steps: identifying key information, assessing the case, prioritizing issues and

objectives, and planning a course of action. Also, in order to better support novices’ decision-

making processes, critical thinking prompts were added to frame and guide them in what to think

during decision-making processes.

In order to encourage discussion, peer group discussions were utilized. The students in

the discussion groups were encouraged to share their decision-making processes and outcomes

with their peers and then provide feedback to each other.

Scaffolded revision activities. In order to promote students’ reflective thinking during

the decision-making process, scaffolded revision activities were employed. The scaffolded

revision activities were designed based on the four main educational implications identified

through the literature review: (1) having students take time to reflect; (2) providing an

opportunity to be exposed to expert performance; (3) providing an opportunity to give feedback


to others; and (4) providing multiple practice experiences. More specific descriptions about each

learning element with actual examples will be discussed in the next section, Methods.

First, in order to have students take time to reflect upon their own practices, they were

asked to focus on two key activities: (a) make their own decision, and (b) revisit and revise their

decision. Since novices rarely engage in reflective thinking which is a purposeful action, it is

important to facilitate novices to take time to reflect on their thinking and actions as well as think

upon how to improve the quality of the outcome (Mamede & Schmidt, 2004).

In order to provide an opportunity to be exposed to expert performance and to give

feedback to others, the scaffolded revision activities encouraged novices to explore others’ ideas

with prompts to revisit and reassess their own actions and thinking (Land & Zembal-Saul, 2003).

To elaborate, expert commentary demonstrated how experts approach a clinical case and make a

decision, and peer feedback provided students with opportunities to exchange feedback with

their peers.

Lastly, in order to provide multiple practice experiences, five decision points were

designed within a series of complex problems, which were expected to stimulate novices’

reflective thinking (Mamede & Schmidt, 2005). Given that one’s reflective thinking ability can

be framed and enhanced as their expertise develops, it is important to provide novices with

multiple practice experiences (Mamede & Schmidt, 2005).


CHAPTER 3

METHODS

This chapter provides information about the methodology employed for this study. To elaborate, the first section provides information about the participants, the course from which the participants were recruited, and the interventions the participants received. The next section presents the research design, including the research questions on which this study focused. Lastly, this chapter describes the data collection and analysis plan.

Participants

The participants for this study were chosen due to their participation in one of the third-

year core courses, specifically the Small Animal Digestive Diseases course. In total, 102 third-

year students were given the opportunity to participate in the study. During the early stages of

data collection, however, only 100 students agreed to participate in the research by filling out the

printed informed consent form.

After the students gave consent, the instructor of the course introduced a case-based online learning module to them. From there, the participants were allowed to self-select between

conditions of scaffolded revision activity: expert commentary only (EC/NP), expert commentary

as well as early peer feedback (EC/EP), and expert commentary as well as later peer feedback

(EC/LP).

Subsequently, over twenty students signed up for each peer feedback session. During the actual data collection, however, only nine students participated in the early peer feedback group, while 13 students participated in the later peer feedback group. The remaining 78 students who did not sign up for or participate in either peer feedback session were automatically assigned to the expert commentary only group.

To balance the number of participants, 25 participants from the EC group receiving

expert commentary only were purposefully selected based on their GPA and gender. The final participants for this study were 25 students (19 female and 6 male) for the expert commentary

only group (EC/NP), nine students (8 female and 1 male) for the expert commentary with early

peer feedback (EC/EP), and 13 students (11 female and 2 male) for the expert commentary with

later peer feedback group (EC/LP) (see Table 3-1).

The average GPAs for the EC/NP, EC/EP, and EC/LP groups were 3.52 (SD = .34), 3.59 (SD = .30), and 3.41 (SD = .40), respectively. To test the homogeneity of the three groups, Levene's test was conducted. For the GPA variable, the F value for Levene's test was 1.209 with a p value of .308. This result indicates that there is no significant difference among the three groups' variances.
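As a concrete illustration of this homogeneity check, the sketch below shows how Levene's test could be run with SciPy. The per-student GPA lists are hypothetical placeholders (only group means and standard deviations are reported above), and all variable names are illustrative rather than taken from the study's materials.

```python
# A minimal sketch of the homogeneity-of-variance check described above,
# using SciPy's Levene's test. The GPA lists are hypothetical placeholders,
# not the study's actual data.
from scipy import stats

gpa_ec_np = [3.1, 3.3, 3.5, 3.6, 3.8, 3.9]  # hypothetical EC/NP GPAs
gpa_ec_ep = [3.2, 3.5, 3.6, 3.7, 3.9]       # hypothetical EC/EP GPAs
gpa_ec_lp = [2.9, 3.2, 3.4, 3.6, 3.8]       # hypothetical EC/LP GPAs

# Null hypothesis: the three groups have equal variances.
f_statistic, p_value = stats.levene(gpa_ec_np, gpa_ec_ep, gpa_ec_lp)
print(f"Levene's F = {f_statistic:.3f}, p = {p_value:.3f}")
```

A p value above .05, as in the result reported above (p = .308), provides no evidence against the assumption that the three self-selected groups have comparable GPA variability.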

Table 3-1

Participants' gender, average GPA, and the result of Levene's test

Group      Female   Male   Total   GPA Mean   GPA SD   Levene's test p
EC/NP      19       6      25      3.52       .34      .308
EC/EP      11       2      13      3.59       .30
EC/LP      8        1      9       3.41       .40
Overall    38       9      47      3.50       .35

Note. EC/NP indicates the expert commentary only group, EC/EP indicates the expert commentary with early peer feedback group, and EC/LP indicates the expert commentary with later peer feedback group. The Levene's test p value applies to the comparison of GPA variances across the three groups.


The Small Animal Digestive Diseases Course

Veterinary school students learn about scientific foundations in their first two years, and

then in the following two years, they are increasingly involved in clinically oriented coursework

and finally are trained in completely clinically-based situations. Since third-year students are

required to apply their medical knowledge to make clinical decisions in real clinical settings, this is an appropriate point at which to train their clinical decision-making skills with case-based learning.

The course titled Small Animal Digestive Diseases was one of the required courses for

junior students to learn medical knowledge about the diagnosis and management of the medical

and surgical digestive disorders affecting dogs and cats. This course was chosen as a potential

data collection site because the objectives of the course were well aligned with the goals of the case-

based learning environment. Upon the completion of the course, students were expected to do

the following:

- Describe major digestive symptoms such as anorexia, weight loss, regurgitation, and vomiting;
- Understand how to clinically and diagnostically approach these digestive problems;
- Understand the role of surgery in digestive disease;
- Identify digestive issues and generate a basic diagnostic and treatment plan for a clinical case of digestive disease.

Case-Based Online Learning Module

In this study, a case-based online learning module provided a web-based context in which

students identified, explored, and reconstructed an authentic veterinary case. The case-based

online learning module was developed based on Choi’s case-based learning model (Choi et al.,


2013, 2009; Choi & Lee, 2009) and educational implications that have been identified in this

current study.

Sample pages of the case-based online learning module are presented in Figure 3-1 and

Figure 3-2. Figure 3-1 features the page of an initial decision-making activity supported by a realistic case video and critical thinking prompts. Figure 3-2 features the page of a revision activity supported by expert commentary videos.

Figure 3-1

A sample screen of an initial decision-making activity in the case-based learning environment


Figure 3-2

A sample screen of a revision activity in the case-based learning environment

The case-based online learning module was implemented in the Small Animal Digestive

Diseases course to enhance students’ knowledge applications in digestive diseases of small

animals. To elaborate, students were expected to understand and diagnose major digestive

symptoms as well as generate basic treatment plans for a realistic case of digestive diseases upon

the completion of the course and this module. Thus, several learning elements were identified

and then utilized as follows: realistic case videos, a decision-making model based on veterinary

experts’ past experiences along with critical thinking prompts, expert commentary videos,

medical records, and a digital textbook. More specific information on each learning element is

presented in the following sections.


Case Videos

The first learning element used was the case video, because it visually and auditorily

represented a realistic clinical case (Choi et al., 2013, 2009; Choi & Lee, 2009). A team of

experienced veterinary faculty members identified a realistic clinical case based on their past

experiences. The clinical case provided in the learning module was about a 4-year-old Westie

who had vomited several times the previous night. The episode seemed to be a case of 'dietary indiscretion,' but the symptoms persisted. Students, acting as his veterinarian, were asked to diagnose and treat him.

For this research, the educators identified five critical points at which students need to make a decision, as presented in Figure 3-3, and the single case video was segmented into five short case videos accordingly. Square nodes indicate points in the case that require students to make a decision, and rounded square nodes indicate options that students can choose from.


Figure 3-3

Five Decision Points in the module


A decision-making model with critical thinking prompts

The second learning element was made up of two integrated parts: the first part being a

decision-making model based on experts’ decision-making process and the second part being

critical thinking prompts to guide each stage of the model (Choi et al., 2013, 2009; Choi & Lee,

2009). The decision-making model was comprised of four phases, [Identifying key information]

– [Assessing the current case] – [Prioritizing issues and objectives] – [Making an immediate plan]

based on the SOAP writing, which is a thinking and decision-making model developed for

veterinary students.

To support students’ decision-making process, the experienced veterinary educators

identified and developed the critical thinking prompts corresponding to each phase of decision-

making. In the phase of Identifying key information, for example, five or more cues were listed,

and students were asked to identify two or more cues that they think more important than others.

In the phase of Assessing the current case, they were asked to describe how they would utilize

the cues identified in the previous phase. In the phase of Prioritizing issues and objectives, they

needed to set their goals for their course of action. In the last phase, Making an immediate plan,

students were asked to select one single action for their patient out of three or four actions and

justify their decision. The prompts were specifically designed for each Decision Point and asked

the students to verbalize their ideas and opinions. The critical thinking prompts at Decision

Point 1 are presented in Table 3-2.


Table 3-2

The question prompts embedded at Decision Point 1 in the learning module

Decision-making process: Identifying key information
Required activity: Watching an authentic case video and analyzing important cues
Sample question prompt: [Multiple-choice question] From the list provided, please check those items that you are most heavily considering in your decision-making process.
1. Known history of eating garbage
2. Physical exam findings
3. Inflammatory leukogram
4. Hemoconcentration (high Hct / TP)
5. Slightly low chloride
6. Presence of focal ileus on radiographs
7. Presence of fluid-filled intestine on radiographs

Decision-making process: Current case assessment
Required activity: Identifying and justifying the problem situation using the key information in the previous step
Sample question prompt: [Open-ended question] Please describe how you utilized the cues selected above in your thought process about the next action step for Doug.

Decision-making process: Prioritizing issues and objectives
Required activity: Prioritizing involved factors that can affect the final decision
Sample question prompt: [Multiple-choice question] You will be asked to make a decision about the next diagnostic and/or therapeutic step for Doug. Before you make that decision, you must consider the needs, goals and objectives for Doug and his owners. Please write out your thoughts on each of these aspects (goals, questions and/or needs), prior to making a decision.

Decision-making process: Immediate plan
Required activity: Planning and providing justifications for a course of action based on their own analysis
Sample question prompt: [Multiple-choice and Essay questions] Based on your problem identification and assessment, please select the single action you would take next for Doug.
1. Admit Doug to your clinic for supportive care, in the form of IV fluids, antiemetics, careful introduction of a bland diet, and close monitoring.
2. Recommend surgical exploratory of Doug.
3. Investigate further by performing abdominal ultrasound, abdominocentesis, and possibly an upper GI barium contrast study.
Please explain your choice.

Expert commentary videos

The four veterinary faculty members narrated how they would have approached and solved the problems if they had encountered the case described in the video (Choi et al., 2013, 2009; Choi & Lee, 2009). The faculty members had different specialties: three were Diplomates of the American College of Veterinary Surgeons, and one was a Diplomate of the American College of Veterinary Internal Medicine. These different specialties allowed diverse approaches to the given problem; the four experts did not always approach it in the same way and sometimes identified different solutions. Given that an ill-defined clinical setting is characterized by the involvement of multiple players (Higgs & Jones, 2008; Orasanu & Connolly, 1993), these diverse specialties were helpful in better training students.


At each Decision Point, two or three experts’ commentary videos were embedded. Each

video lasted three to five minutes. Transcripts of the interview videos were delivered in PDF form along with the videos.

Scaffolded Revision of the Initial Clinical Decision

In order to enhance veterinary students’ clinical decision-making skills by promoting

their reflection, the participants in this study were encouraged to revisit and make revisions on

their initial decisions while learning with the case-based online learning module (Choi et al.,

2013, 2009; Choi & Lee, 2009).

To better help their revision, they received opportunities to compare their opinions with

others’ opinions before making revisions. Thus, the specific pattern of the activity was [Stage 1:

Making an initial decision] – [Stage 2: Watching expert commentary] – [Stage 3: Exchanging

peer feedback] – [Stage 4: Revising the initial decision]. Stages 1, 2, and 4 were mandatory for all participants, and Stage 3 applied only to those who participated in either the early or the later peer feedback session. This pattern was repeated at each Decision Point.

Stage 1: Making an Initial Decision

In the first step of the learning activity, the students watched a short clinical case video.

Based on the case video, the students were asked to identify problems and make an action plan

for the patient. Four critical thinking prompts were provided to assist their decision-making

process of identifying key information, assessing the current case, prioritizing issues and

objectives, and making an immediate plan. The patient's Medical Records were provided in case students needed more information about the patient. The Digital Textbook, which included relevant medical knowledge from textbooks and journal articles, was also provided to support the students' content


knowledge. After submitting their answers, they were led to the next step, watching the expert

commentary videos.

Stage 2: Watching Expert Commentary

In the next step, the two or three experts’ commentary videos were presented. In the

videos, the experts described their approaches to the situation presented in the case video.

The videos showed the key characteristics of the expert decision-making process: how they

recognize meaningful cues, assess situation(s), set goals, generate solution(s) and justify the

solution(s). By watching the expert videos, the students had a chance to compare their initial decision-making responses with those of the experts.

Stage 3: Exchanging Peer Feedback

After watching expert commentary videos and before making revisions on their initial

clinical decisions, students were allowed to participate in a peer feedback session. A total of two

peer feedback sessions were available, and participants could self-select to join one session or none. In each peer feedback session, the participants completed two Decision Points (Decision Points 2 and 3 in the first session and Decision Points 4 and 5 in the second session) with their partner.

In each peer feedback session, students were encouraged to individually complete the

previous steps, making initial decisions and watching expert commentary videos. Then, they

were asked to share their initial clinical decisions with their partner and discuss how their initial

decisions were similar to and different from those of the experts and how to revise their responses. To facilitate the discussion with their partner, a worksheet with guidelines for the session and reflective prompts was provided (see Appendix A for more information). The reflective prompts guided

the students in comparing their partner’s initial responses to expert commentary (e.g., What are


the similarities and differences between the experts’ identification of key information and that of

your partner? Are any important cues missing from your partner’s list?). After sharing feedback

with their partner, the participants were allowed to move onto the next step, revising initial

decisions.

Stage 4: Revising the Initial Decision

After reviewing the experts’ commentary videos, the students were allowed to revise

their initial answers. This revision activity was designed to encourage students to reflect on their

initial answers. To help their revision, their initial answers were automatically retrieved. The

patient medical records and relevant information were still available. After submitting the

revised answers, they were led to the next Decision Point, which had the same pattern of the

learning activity.

Procedures

Data collection was conducted over approximately nine weeks, from early October to mid-December. In mid-October, the instructor of the course held a demonstration session to introduce the learning module, including its purpose, how to access and navigate it, and how to learn with it. Students were asked to complete this learning

module by the second week of December. The instructor also provided information about two

optional peer feedback sessions: students were allowed to self-select which session they would

like to attend. Over twenty students voluntarily registered for each peer feedback session.

The first scaffolded peer feedback session was conducted in the last week of October, two weeks after the demonstration session. Decision Points 2 and 3 were assigned to the first peer feedback session, and an hour was allotted for each Decision Point. Before the session began, one of the veterinary faculty members reminded the participants of the information about the


session including the purpose and procedures of the session. The participants then received a

printed worksheet with the general information of the session including the purpose and

procedure. The worksheet also included question prompts to support the participants’ peer

feedback activity (e.g., What are the similarities and differences between the experts’

identification of key information and that of your partner? Are any important cues missing from

your partner’s list?). The participants were asked to individually make an initial clinical decision

and watch expert commentary videos. Then, they had 10 to 15 minutes to exchange feedback with their partner based on the question prompts. They reviewed their partner's initial clinical decision, compared it with the experts' clinical decisions, and suggested how to revise the initial

decision. Then they were asked to individually revise their initial responses based on the expert

commentary videos and feedback they received from their partner.

The second scaffolded peer feedback session was conducted in mid-November, two weeks after the first peer feedback session, in the same location as the first session. Before the session began, the instructor of the course reminded the participants of the general information about the session, including its purpose and procedures. The same worksheets used for the first session were also provided. The participants individually and collaboratively completed the two assigned Decision Points during the two-hour session, just as the participants did in the first peer feedback session.

Upon completion of the learning module, an online survey embedded in the closing section of the module was distributed to all participants in order to explore their learning experiences. Also, face-to-face interviews with three volunteers were conducted: one interviewee


participated in the first peer feedback session (EC/EP group) and the other two interviewees

participated in the second peer feedback session (EC/LP group).

Research Design

The purpose of this study was to attempt to enhance veterinary students’ clinical decision-

making skills through scaffolded revision activities in a case-based learning environment.

Specifically, the scaffolded revision activities in a case-based learning environment were

designed to promote the students’ knowledge acquisition and reflection on their thinking and

actions.

Research Questions

This research was designed to answer the following research questions:

Research Question 1. Do the scaffolded revision activities enhance the quality of

students’ revised clinical decisions in case-based learning?

o RQ1-1. Do the scaffolded revision activities enhance the overall quality of

students’ revised clinical decision in case-based learning?

o RQ1-2. Do the scaffolded revision activities enhance the quality of students’

revised case assessment in case-based learning?

o RQ1-3. Do the scaffolded revision activities enhance the overall quality of

students’ revised prioritization of issues and objectives in case-based learning?

o RQ1-4. Do the scaffolded revision activities enhance the overall quality of

students’ revised plan of an immediate action in case-based learning?

Research Question 2. Are there significant differences in the quality of students' initial and revised clinical decisions among the scaffolded revision activity groups in case-based learning?


o RQ 2-1. Are there significant differences in the quality of students’ initial and

revised clinical decision among the expert commentary only group (EC/NP),

expert commentary with early peer feedback group (EC/EP), and expert

commentary with later peer feedback group (EC/LP)?

o RQ 2-2. Are there significant differences in the quality of students’ initial and

revised clinical decision between peer feedback groups (EC/EP and EC/LP) and

no peer feedback group (EC/NP)?

o RQ2-3. Are there significant differences in the quality of students’ initial and

revised clinical decision between early peer feedback group (EC/EP) and later

peer feedback group (EC/LP)?

Research Question 3. Are there significant differences in the quality of the initial and

revised clinical decision among groups across the two peer feedback sessions?

o RQ 3-1. Are there significant differences in the quality of students’ initial and

revised clinical decision among the expert commentary only group (EC/NP),

expert commentary with early peer feedback group (EC/EP), and expert

commentary with later peer feedback group (EC/LP) across the two peer feedback

sessions?

o RQ 3-2. Are there significant differences in the quality of students’ initial and

revised clinical decision between peer feedback groups (EC/EP and EC/LP) and

no peer feedback group (EC/NP) across the two peer feedback sessions?

o RQ 3-3. Are there significant differences in the quality of students’ initial and

revised clinical decision between early peer feedback group (EC/EP) and later

peer feedback group (EC/LP) across the two peer feedback sessions?


Research Question 4. Does the participation in a scaffolded revision activity affect

students’ transferred clinical decision-making skills?

o RQ 4-1. Are there significant differences in the scores on a transferred clinical

decision test among EC/NP (expert commentary only), EC/EP (expert

commentary and early peer feedback), and EC/LP (expert commentary and later

peer feedback)?

o RQ 4-2. Are there significant differences in the scores on a transferred clinical

decision test between peer feedback groups (EC/EP and EC/LP) and no peer

feedback group (EC/NP)?

o RQ 4-3. Are there significant differences in the scores on a transferred clinical

decision test between early peer feedback group (EC/EP) and later peer feedback

group (EC/LP)?

Research Question 5. What are the students’ perceptions on the revision activity in the

case-based online learning module?

o RQ 5-1. What are the students’ perceptions on the expert commentary for revising

their initial clinical decisions?

o RQ 5-2. What are the students’ perceptions on the peer feedback for revising their

initial clinical decisions?

o RQ 5-3. What are the students’ perceptions on the effectiveness of the peer

feedback compared to the expert commentary?

A Time-Series Design

A time-series design was used to evaluate the effectiveness of the scaffolded peer

feedback and its timing. A time-series design typically involves multiple observations of data


gathered both prior to and after the intervention (Shadish, Cook, & Campbell, 2002). This

design is based on the idea that researchers are able to identify a pattern by observing data both

prior to and after an intervention. A pattern may show an ongoing increase, decline, flat or

fluctuating. By identifying the pattern before and after the intervention, a researcher is able to

examine the effect of the intervention. The design for this study is diagrammed as in Table 3-3.

Table 3-3

A time series design for this research

Group    S1 (October 2015)         S2 (November 2015)        S3 (December 2015)
EC/NP    O1-Pre  X1  O1-Post       O2-Pre  X1  O2-Post       O3
EC/EP    O1-Pre  X2  O1-Post       O2-Pre  X1  O2-Post       O3
EC/LP    O1-Pre  X1  O1-Post       O2-Pre  X2  O2-Post       O3

Note. EC/NP, EC/EP, and EC/LP indicate the three groups of the participants in this study:

EC/NP received expert commentary only, EC/EP received expert commentary and early peer

feedback, while EC/LP received expert commentary and later peer feedback. S1, S2 and S3

indicate the three sessions of data collection to measure students’ clinical decision-making skills.

S1 and S2 consist of two decision points embedded in the case-based e-learning environment

respectively. During S1 and S2, the group receiving peer feedback spent two hours completing the assigned modules, whereas the groups that did not receive peer feedback completed the modules independently within two weeks. For example, during S1, the EC/EP group spent two hours completing the module, while the EC/NP and EC/LP groups completed it independently within two weeks. S3 took approximately 20 minutes to

complete. O1 to O3 indicate data collections to measure students’ clinical decision-making skills.

O1-Pre and O2-Pre indicate students' initial clinical decisions made in the corresponding session.
O1-Post and O2-Post indicate students' revised clinical decisions made in the corresponding session.
O3 indicates a multiple-choice exam designed to measure students' near-transferred clinical

decision-making skills. X1 indicates the treatment, scaffolded revision with expert commentary

only, and X2 indicates the treatment, scaffolded revision with expert commentary as well as peer

feedback.

Data Collection and Analysis

To answer the research questions, multiple sources of data were collected and analyzed accordingly. The data collection and analysis plan is summarized in Table 3-4.


Table 3-4

The research questions, data collection, data source, and data analysis techniques

RQ1. Does the expert commentary enhance the quality of students' revised clinical decisions in case-based learning?
   Data collection: A pre-post design (S1 and S2); ALL: Opre X Opost
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ2-1. Are there significant differences in the quality of students' initial and revised clinical decision among the expert commentary only group (EC/NP), expert commentary with early peer feedback group (EC/EP), and expert commentary with later peer feedback group (EC/LP)?
   Data collection: A time series with switching replications design (S1 and S2); EC/NP: Opre X1 Opost; EC/EP: Opre X2 Opost; EC/LP: Opre X2 Opost
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ2-2. Are there significant differences in the quality of students' initial and revised clinical decision between peer feedback groups (EC/EP and EC/LP) and no peer feedback group (EC/NP)?
   Data collection: A time series with switching replications design (S1 and S2); EC/NP: Opre X1 Opost; EC/EP & EC/LP: Opre X2 Opost
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ2-3. Are there significant differences in the quality of students' initial and revised clinical decision between early peer feedback group (EC/EP) and later peer feedback group (EC/LP)?
   Data collection: A time series with switching replications design (S1 and S2); EC/EP: Opre X Opost; EC/LP: Opre X Opost
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ3-1. Are there significant differences in the quality of students' initial and revised clinical decision among the expert commentary only group (EC/NP), expert commentary with early peer feedback group (EC/EP), and expert commentary with later peer feedback group (EC/LP) across the two peer feedback sessions?
   Data collection: A time series with switching replications design (S1 and S2); EC/NP: O X1 O / O X1 O; EC/EP: O X2 O / O X1 O; EC/LP: O X1 O / O X2 O
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ3-2. Are there significant differences in the quality of students' initial and revised clinical decision between peer feedback groups (EC/EP and EC/LP) and no peer feedback group (EC/NP) across the two peer feedback sessions?
   Data collection: A time series with switching replications design (S1 and S2); EC/NP: O X1 O / O X1 O; EC/EP & EC/LP: O X2 O / O X2 O
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ3-3. Are there significant differences in the quality of students' initial and revised clinical decision between early peer feedback group (EC/EP) and later peer feedback group (EC/LP) across the two peer feedback sessions?
   Data collection: A time series with switching replications design (S1 and S2); EC/EP: O X2 O / O X1 O; EC/LP: O X1 O / O X2 O
   Data source: Assessment, prioritization, plan, and average scores of the student initial and revised decision-making responses
   Analysis technique: Repeated-measures ANOVAs

RQ4-1. Are there significant differences in the scores on a transferred clinical decision test among EC/NP (expert commentary only), EC/EP (expert commentary and early peer feedback), and EC/LP (expert commentary and later peer feedback)?
   Data collection: A posttest-only design (S1, S2, S3); EC/NP: X1 X2 O; EC/EP: X2 X1 O; EC/LP: X1 X2 O
   Data source: A multiple-choice transfer test
   Analysis technique: One-way ANOVA

RQ4-2. Are there significant differences in the scores on a transferred clinical decision test between peer feedback groups (EC/EP and EC/LP) and no peer feedback group (EC/NP)?
   Data collection: A posttest-only design (S1, S2, S3); EC/NP: X1 X2 O; EC/EP & EC/LP: X2 X2 O
   Data source: A multiple-choice transfer test
   Analysis technique: One-way ANOVA

RQ4-3. Are there significant differences in the scores on a transferred clinical decision test between early peer feedback group (EC/EP) and later peer feedback group (EC/LP)?
   Data collection: A posttest-only design (S1, S2, S3); EC/EP: X2 X1 O; EC/LP: X1 X2 O
   Data source: A multiple-choice transfer test
   Analysis technique: One-way ANOVA

RQ5. What are the students' perceptions on the revision activity in the case-based online learning module?
   Data collection: Online survey and face-to-face interviews
   Data source: -
   Analysis technique: -

Clinical decision-making skills

Students' initial and revised answers were stored on the web-based data server and collected to examine their clinical decision-making skills. In order to analyze the written responses that the students generated at each Decision Point, three dimensions were identified as dependent variables (see Table 3-5).


Table 3-5

Three dimensions of the clinical decision-making skills and their rubric

Clinical decision-making skills    Rubric

Assessment: Accurately interprets value of chosen evidence, findings, data, etc.; Identifies the strengths and weaknesses of each piece of key information; Thoughtfully analyzes and evaluates content of key information; Draws warranted, judicious, appropriate conclusions.

Objectives: Identifies the strengths and weaknesses of each piece of key information; Thoughtfully analyzes and evaluates obvious alternative points of view; Commits to a judicious priority system and explains assumptions and reasons.

Choice Explanation: Thoughtfully analyzes and evaluates content of key information; Commits to chosen course of action and explains assumptions and reasons; Addresses the potential for error by proposing a plan for management of error; Fair-mindedly follows where evidence and reasons lead.

In order to analyze the quality of students' clinical decision-making skills, the same panel of veterinary educators identified an ideal decision-making script for each Decision Point. Based on the ideal scripts, the educators graded each student's initial and revised responses.

Figure 3-4

Scoring system for student decision-making responses


The rules that the veterinary educators used to grade students' answers are as follows (see Figure 3-4). First, the educators checked whether a student chose the appropriate answer to the corresponding multiple-choice question. A score of 25% or lower was given when the student chose a wrong answer, and 25% or higher was given when the student chose an appropriate answer. When the answer was appropriate, the student's score depended on his or her justification. When the justification was poor, the maximum score was 50%: if the student provided no justification, 50% was given; if the student provided a wrong justification, 25% was the maximum; and if the justification was noncommittal, or accurate but unrelated to the question, 30% was given. When both the multiple-choice answer and the justification were appropriate, a score of 50% or higher was given. The student's final score was then determined by the key variables that the student considered in justifying his or her decision. Table 3-6 presents an example of the key variables a student is expected to consider and the ideal answers for Decision Point 2.
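As a rough illustration only, the branching logic described above can be sketched in a few lines of Python. The function below is hypothetical (it is not the graders' actual instrument), and the way credit above 50% is mapped onto the proportion of key variables addressed is an assumption made for illustration, not part of the study's rubric.

def score_response(answer_correct, justification, justification_quality, key_variable_ratio):
    """Illustrative sketch of the grading rules described above.

    justification_quality: 'appropriate', 'noncommittal_or_unrelated', or 'wrong'.
    key_variable_ratio: assumed share (0-1) of the ideal script's key variables addressed.
    """
    if not answer_correct:
        return 25.0                      # wrong multiple-choice answer: at most 25%
    if justification is None or not justification.strip():
        return 50.0                      # appropriate answer but no justification
    if justification_quality == "wrong":
        return 25.0                      # appropriate answer, wrong justification: 25% maximum
    if justification_quality == "noncommittal_or_unrelated":
        return 30.0                      # noncommittal, or accurate but unrelated to the question
    # appropriate answer and appropriate justification: 50% or more,
    # scaled here (as an assumption) by the key variables considered
    return 50.0 + 50.0 * max(0.0, min(1.0, key_variable_ratio))

print(score_response(True, "Bowel is viable; enterotomy is indicated.", "appropriate", 0.8))  # 90.0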


Table 3-6

Sample ideal script at Decision Point 2 “Taking Action for Doug”

Assessing the current case
   What must be considered: (1) Description of the intestine; (2) Antimesenteric enterotomy; (3) Indications for a resection and anastomosis; (4) Perforation; (5) No resection is required
   Ideal script: The intestine appears healthy at the site of the obstruction. While you can see distention in the area of the foreign body, the intestinal color is normal and there is no perforation. Because the intestine appears healthy, an antimesenteric enterotomy is indicated to remove the foreign material and relieve the obstruction. A resection and anastomosis is not indicated. Indications for a resection and anastomosis include devitalized bowel evidenced by a dark color (black or purple) or perforation, neither of which is observed. Additionally, there is no free fluid or evidence of peritonitis or serositis.

Prioritizing issues and objectives
   What must be considered: (1) No prognosis; (2) Re-establishment of normal aboral flow of ingesta
   Ideal script: The major objective is to relieve the intestinal obstruction and re-establish normal aboral flow of ingesta. The prognosis for Doug is good because the intestine appears healthy and no resection is required.

Planning an immediate action
   What must be considered: (1) Enterotomy; (2) Viability of the bowel; (3) Foreign body cannot be removed
   Ideal script: Simple enterotomy is chosen because the foreign body cannot be moved, but the bowel is not sufficiently devitalized to warrant either resection/anastomosis or euthanasia.

A total of three veterinary educators reviewed similarities between student responses and

the ideal scripts in terms of their correctness and justifications. Because Decision Point 1 was

used for the demonstration session, students’ responses at Decision Point 1 were excluded from

the review. For each Decision Point, two trained raters (veterinary educators) reviewed students' answers: Reviewers A and B were assigned to Decision Point 2, Reviewers A and C to Decision Point 3, Reviewers C and B to Decision Point 4, and Reviewers B and C to Decision Point 5. The raters independently reviewed blinded data in which students' names were replaced with random numbers. Each student received a total of four scores: scores for assessing the case, prioritizing issues and objectives, and planning, as well as the overall average of the three scores. The scoring range for student decision-making responses was 0 to 100.

In order to calculate the inter-rater reliabilities, 10% of the student answers were randomly selected from each Decision Point. The two trained raters (veterinary educators) evaluated the blinded data independently. When there was a large discrepancy between the two raters' scores (i.e., a difference exceeding a threshold of the mean score plus one standard deviation), the score for that answer was negotiated and newly determined. For each variable, Pearson's r was calculated over the two reviewers' scores as an inter-rater reliability coefficient. The initial average reliability was r = .769 (p < .01), and the negotiated average reliability was r = .927 (p < .01), as indicated in Table 3-7. Since the average inter-rater reliability reached an acceptable level, the first rater proceeded to evaluate the remaining answers. For the analysis, the scores of the first rater were used.
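For reference, the reliability coefficient described above corresponds to an ordinary Pearson correlation between the two raters' scores on the shared subsample. A minimal sketch with scipy is shown below; the rater scores are hypothetical and are not the study's data.

from scipy.stats import pearsonr

# Hypothetical scores given by two raters to the same randomly selected answers
rater_a = [45, 60, 30, 75, 50, 55, 40, 65]
rater_b = [50, 58, 35, 70, 52, 60, 38, 68]

r, p_value = pearsonr(rater_a, rater_b)   # inter-rater reliability coefficient
print(f"Pearson's r = {r:.3f}, p = {p_value:.3f}")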

Table 3-7

Inter-rater reliabilities over the Decision Points

                       Decision Point 2   Decision Point 3   Decision Point 4   Decision Point 5   Overall
Before negotiation     .833**             .820**             .795**             .641**             .769**
After negotiation      .943**             .937**             .957**             .811**             .927**
* p < .05, ** p < .01


Transferred clinical decision-making skills

To examine students' transferred clinical decision-making skills, a final test was embedded at the conclusion of the learning module. The test included questions about gastrointestinal disease cases whose symptoms differed from the case embedded in the case-based learning environment. The test began with a written case including the clinical history and the results of a physical examination, followed by six multiple-choice questions that asked students to make a series of clinical decisions. Scores were given for each correct choice. Figure 3-5 shows one of the six questions.

Figure 3-5

A sample question from the final test

Learning experience survey

To determine students' appreciation of the peer feedback sessions, the reflection prompts, and the case-based learning module, student opinions were collected by means of an evaluation questionnaire. The questionnaire was embedded in the module at the closing Decision Point.

The questionnaire comprised three groups of questions: learning experiences with expert commentary only, learning experiences with expert commentary and peer feedback, and perceptions of the effectiveness of the learning experiences with expert commentary only compared with those with the addition of peer feedback. The participants who took part in either peer feedback session received all three groups of questions, and the participants who did not take part in a feedback session received only the first group of questions.

For each group of questions, the students indicated the extent to which they agreed with each statement. Most items were accompanied by a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). Additionally, several follow-up open-ended questions were posed to investigate students' experiences with the scaffolded revision activities in greater depth. The online survey is appended as Appendix A.

Learning experience interview

To explore students’ experiences with the peer feedback sessions in detail, individual

interviews were conducted after the learning module was completed. Those who participated in

the peer feedback sessions were recruited for the interviews via email with the instructor’s help.

A total of three students voluntarily agreed to participate in the interviews. Interviewee A was a female student who participated in the first peer feedback session. Interviewees B and C were female students who participated in the second peer feedback session.

During the one-hour face-to-face interviews, the participants were asked to describe their clinical decision-making processes when working alone with the expert commentary only and when working with the addition of their partner's feedback. Specifically, they were asked to describe how they decided to participate in the peer feedback session, what activities they expected at first, and how their expectations differed from the actual feedback session.

Moreover, they were asked to describe how they made an initial clinical decision and revised it while working through the module. They were also asked to elaborate on what they did during the peer feedback session, such as what they discussed with their partner and how they revised their initial responses. The interviews were audio-recorded with the

interviewees’ permission and transcribed for further analysis.


CHAPTER 4

RESULTS

For this research, the three groups’ decision-making scores were collected as presented in

Table 4-1. To examine the effectiveness of the expert commentary videos, peer feedback, timing

of the peer feedback, and interaction effects of the three variables, different combinations of the

data were used.

Table 4-1

Data collection for the quality of student decision-making across groups in three sessions

                  Session 1            Session 2            Across Sessions 1 & 2    Session 3
                  Initial    Revised   Initial    Revised   Initial    Revised       Transfer test
EC/NP a           NP1-1      NP1-2     NP2-1      NP2-2     NP-1       NP-2          NP-3
EC/EP b           EP1-1      EP1-2     EP2-1      EP2-2     EP-1       EP-2          EP-3
EC/LP c           LP1-1      LP1-2     LP2-1      LP2-2     LP-1       LP-2          LP-3
Total             SS1-1      SS1-2     SS2-1      SS2-2     AVG-1      AVG-2

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ1. Revision effects of the scaffolded revision activities

Research Question 1. Do the scaffolded revision activities enhance the quality of

students’ revised clinical decision in case-based learning?

o Research Question 1-1. Do the scaffolded revision activities enhance the overall

quality of students’ revised clinical decision in case-based learning?


o Research Question 1-2. Do the scaffolded revision activities enhance the quality

of students’ revised case assessment in case-based learning?

o Research Question 1-3. Do the scaffolded revision activities enhance the quality of students' revised prioritization of issues and objectives in case-based learning?

o Research Question 1-4. Do the scaffolded revision activities enhance the quality of students' revised plan of an immediate action in case-based learning?

The first research question examines the effects of the scaffolded revision activities on

the quality of students’ revised clinical decision in a case-based learning environment. In order

to test whether the quality of the revised clinical decision was significantly improved after

participating in the scaffolded revision activities, all participants’ overall initial and revised

scores from the two data collection sessions were collected. Specifically, the overall quality of

all participants' initial and revised decisions was tested. If there were significant differences between the initial and revised decisions, follow-up tests on the three sub-dimensions (i.e., case assessment, prioritization of issues and objectives, and plan of an immediate action) would be conducted.


Table 4-2

Data used to test the revision effect (Research Question 1)

                  Session 1            Session 2            Across Sessions 1 & 2    Session 3
                  Initial    Revised   Initial    Revised   Initial    Revised       Transfer test
EC/NP a           -          -         -          -         -          -             -
EC/EP b           -          -         -          -         -          -             -
EC/LP c           -          -         -          -         -          -             -
Total             -          -         -          -         AVG-1      AVG-2         -

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ1-1. Revision effects on the overall quality of the revised clinical decision (Initial vs.

Revised clinical decision)

Hypothesis 1-1. The overall quality of students' revised clinical decisions after participating in the scaffolded revision activities will be significantly higher than the overall quality of the initial clinical decisions made before watching the expert commentary videos in case-based learning.

Descriptive statistics on the initial and revised decisions from the two sessions are

presented in Table 4-3. As shown in Table 4-3, the average score of initial decisions was 47.61

(SD = 11.23), and the average score of revised decisions was 54.52 (SD = 11.54). Based on the

descriptive data in Table 4-3, a graphical comparison in the overall quality of the initial and

revised clinical decisions is presented in Figure 4-1.


Table 4-3

Descriptive statistics on the overall quality of the initial and revised clinical decisions
(N = 47)

              Initial              Revised
              M        SD          M        SD
Overall       47.61    11.23       54.52    11.54

Figure 4-1

The overall quality of initial and revised clinical decisions

To examine whether the quality of students' initial answers was significantly improved after participating in the scaffolded revision activities, a repeated-measures ANOVA was conducted on the overall quality of the students' initial and revised clinical decisions (Table 4-4).
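To make the analysis concrete, the sketch below shows how a one-factor (initial vs. revised) repeated-measures ANOVA of this kind might be specified with statsmodels; the data frame, its column names, and the scores are hypothetical and serve only as an illustration of the general approach, not the study's actual analysis script.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per student per phase (initial / revised)
df = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4],
    "phase":   ["initial", "revised"] * 4,
    "score":   [45.0, 52.0, 50.5, 58.0, 39.0, 47.5, 55.0, 60.0],
})

# One within-subjects factor (phase) tests the revision main effect
result = AnovaRM(data=df, depvar="score", subject="student", within=["phase"]).fit()
print(result)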


Table 4-4

Summary of repeated-measures ANOVA for the overall quality of the initial and revised clinical

decisions

Source                   Sum of Squares    df    Mean square     F           p       η2
Within subjects
  Revision a             2241.582          1     2241.582        44.081      .000    .489
  Error (Revision)       2339.175          46    50.852
Between subjects
  Intercept              122548.936        1     122548.936      1047.263    .000    .958
  Error                  5382.843          46    117.018

Note. a Revision indicates the quality improvement of the revised clinical decision after watching the expert commentary videos.

As described in Table 4-4, the results revealed a significant main effect, F(1, 46) =

44.081, p < .05, indicating that there were statistically significant differences in the quality

between the initial (M = 47.61, SD = 11.23) and revised decisions (M = 54.52, SD = 11.54). In

other words, the overall quality of the initial decision-making was significantly improved after

participating in the scaffolded revision activities. The difference between the initial and revised

answers was a large effect, with η2 = .489 (Cohen, 1988).
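For readers who wish to verify the effect size, the reported value is consistent with the partial eta-squared computed directly from the sums of squares in Table 4-4 (a standard formula, shown here only as a check on the reported figure):

\eta_p^2 = \frac{SS_{\text{Revision}}}{SS_{\text{Revision}} + SS_{\text{Error(Revision)}}} = \frac{2241.582}{2241.582 + 2339.175} \approx .489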

RQ1-2. Revision effects on the quality of assessment (Initial vs. Revised)

Hypothesis 1-2. The quality of students' revised case assessment after participating in the scaffolded revision activities will be significantly higher than the quality of the initial case assessment before participating in the scaffolded revision activities.

Since the overall quality of the revised clinical decisions was significantly enhanced after participating in the scaffolded revision activities, follow-up tests were conducted to examine whether a significant revision effect existed for each of the three sub-dimensions. In the follow-up analysis, multiple repeated-measures ANOVAs were employed to compare the quality of the initial and revised clinical decisions in terms of the three dimensions: case assessment, prioritization of issues and objectives, and plan of an immediate action.

Descriptive statistics on the initial and revised case assessment are described in Table 4-

5. As shown in Table 4-5, the average score of the initial case assessment was 46.00 (SD = 14.37), and the average score of the revised case assessment was 50.60 (SD = 15.70).

Table 4-5

Descriptive statistics on the quality of the initial and revised case assessment

(N = 47)

Initial Revised

M SD M SD

Assessment 46.00 14.37 50.60 15.70

In order to identify whether the differences between the initial and revised case assessment were statistically significant, the qualities of all participants' responses in case assessment were analyzed using a repeated-measures ANOVA with revision (initial answers and revised answers) as a within-subjects factor.


Table 4-6

Summary of repeated-measures ANOVA for the qualities of the initial and revised case

assessment

Source                   Sum of Squares    df    Mean square     F          p       η2
Within subjects
  Revision a             994.980           1     994.980         10.714     .002    .189
  Error (Revision)       4272.082          46    92.871
Between subjects
  Intercept              109624.096        1     109624.096      539.469    .000    .921
  Error                  9347.545          46    203.207

Note. a Revision indicates the qualities of the initial and revised clinical decision.

As described in Table 4-6, the main effect of revision was significant, F(1, 46) = 10.714,

p < .05. This result indicated that there were statistically significant differences between the

qualities of the initial (M = 46.00, SD = 14.37) and revised case assessment (M = 50.60, SD =

15.70). In other words, the quality of the students' revised case assessment was significantly higher than the quality of the initial case assessment. The

difference between the initial and revised case assessment was a small effect, with η2 = .189

(Cohen, 1988).

RQ1-3. Revision effects on the quality of prioritization (Initial vs. Revised)

Hypothesis 1-3. The quality of students' revised prioritization of issues and objectives after participating in the scaffolded revision activities will be significantly higher than the quality of the initial prioritization before participating in the scaffolded revision activities.


Descriptive statistics on the initial and revised prioritization of issues and objectives are

described in Table 4-7. As shown in Table 4-7, the average score of the initial prioritization was 46.68 (SD = 11.49), and the average score of the revised prioritization was 52.29 (SD = 13.05).

Table 4-7

Descriptive statistics on the quality of the initial and revised prioritization of issues and

objectives

(N = 47)

Initial Revised

M SD M SD

Prioritization 46.68 11.49 52.29 13.05

To identify whether the differences between the initial and revised prioritization of issues and objectives were statistically significant, the qualities of all participants' responses in prioritization were analyzed using a repeated-measures ANOVA with revision (initial answers and revised answers) as a within-subjects factor.


Table 4-8

Summary of repeated-measures ANOVA for the qualities of the initial and revised prioritization

of issues and objectives

Source                   Sum of Squares    df    Mean square     F          p       η2
Within subjects
  Revision a             1480.086          1     1480.086        25.294     .000    .355
  Error (Revision)       2691.726          46    58.516
Between subjects
  Intercept              115075.141        1     115075.141      842.958    .000    .948
  Error                  6279.624          46    136.514

Note. a Revision indicates the qualities of the initial and revised clinical decision.

As described in Table 4-8, the main effect of revision was significant, F(1, 46) = 25.294,

p < .05. This result indicated that there were statistically significant differences between the

qualities of the initial (M = 46.68, SD = 11.49) and revised prioritization (M = 52.29, SD = 13.05). In other words, the quality of the students' revised prioritization of issues and objectives was significantly higher than the quality of the initial prioritization.

The difference between the initial and revised prioritization was a medium effect, with η2 = .355

(Cohen, 1988).

RQ1-4. Revision effects on the quality of plan (Initial vs. Revised)

Hypothesis 1-4. The quality of students' revised plan of an immediate action after participating in the scaffolded revision activities will be significantly higher than the quality of the initial plan before participating in the scaffolded revision activities.


Descriptive statistics on the quality of the initial and revised plan of an immediate action

are summarized in Table 4-9. As shown in Table 4-9, the average score of the initial plan was 50.16 (SD = 13.62), and the average score of the revised plan was 60.66 (SD = 13.30).

Table 4-9

Descriptive statistics on the quality of the initial and revised plan of an immediate action

(N = 47)

Initial Revised

M SD M SD

Plan 50.16 13.62 60.66 13.30

To identify whether the differences between the initial and revised plan of an immediate action were statistically significant, the qualities of all participants' responses in plan were analyzed using a repeated-measures ANOVA with revision (initial answers and revised answers) as a within-subjects factor.

Table 4-10

Summary of repeated-measures ANOVA for the quality of the initial and revised plan of an

immediate action

Source                   Sum of Squares    df    Mean square     F          p       η2
Within subjects
  Revision a             5187.001          1     5187.001        40.124     .000    .466
  Error (Revision)       5946.561          46    129.273
Between subjects
  Intercept              144314.237        1     144314.237      969.175    .000    .955
  Error                  6849.591          46    148.904

Note. a Revision indicates the qualities of the initial and revised clinical decision.


As described in Table 4-10, the main effect of revision was significant, F(1, 46) = 40.124,

p < .05. This result indicated that there were statistically significant differences between the

qualities of the initial (M = 50.16, SD = 13.62) and revised plan (M = 60.66, SD = 13.30). In other words, the quality of the students' revised plan of an immediate action was significantly higher than the quality of the initial plan. The difference

between the initial and revised plan was a large effect, with η2 = .466 (Cohen, 1988). Based on

the descriptive data, a graphical comparison in the quality of the three sub-dimensions of initial

and revised clinical decisions is presented in Figure 4-2.

Figure 4-2

The quality of the three sub-dimensions of the initial and revised clinical decisions

RQ2. Revision effects by groups

Research Question 2. Are there significant differences in the quality of students' initial and revised clinical decisions among the three scaffolded revision activity groups in case-based learning?


o Research Question 2-1. Are there significant differences in the quality of students’

initial and revised clinical decision among the expert commentary only group

(EC/NP), expert commentary with early peer feedback group (EC/EP), and expert

commentary with later peer feedback group (EC/LP)?

o Research Question 2-2. Are there significant differences in the quality of students’

initial and revised clinical decision between peer feedback groups (EC/EP and

EC/LP) and no peer feedback group (EC/NP)?

o Research Question 2-3. Are there significant differences in the quality of students’

initial and revised clinical decision between early peer feedback group (EC/EP)

and later peer feedback group (EC/LP)?

The second research question examines the quality of the initial and revised clinical decisions among the groups formed by the three different scaffolded revision activities. To test the interaction effect between revision and group, multiple repeated-measures ANOVAs were conducted with three different group contrasts: (1) the first comparison was among the EC/NP, EC/EP, and EC/LP groups on the basis of the quality of the initial and revised clinical decisions; (2) the second comparison was between the EC/NP group, which received expert commentary only, and the other two groups, EC/EP and EC/LP, which received the addition of peer feedback; and (3) the last comparison was between the two peer feedback groups, EC/EP and EC/LP, which differed in the timing of the peer feedback.
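One way a revision (within-subjects) by group (between-subjects) design of this kind can be tested is with a mixed ANOVA, for example via the pingouin package. The sketch below is only an illustration of the general specification; the data frame, column names, and scores are hypothetical and do not reproduce the study's analysis.

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: nine students (three per group), each with an
# initial and a revised score
groups  = ["EC/NP"] * 3 + ["EC/EP"] * 3 + ["EC/LP"] * 3
initial = [44, 47, 52, 50, 49, 51, 46, 55, 48]
revised = [50, 53, 60, 58, 57, 59, 51, 62, 56]

rows = []
for i, (g, pre, post) in enumerate(zip(groups, initial, revised), start=1):
    rows.append({"student": i, "group": g, "phase": "initial", "score": pre})
    rows.append({"student": i, "group": g, "phase": "revised", "score": post})
df = pd.DataFrame(rows)

# Mixed ANOVA: 'phase' is the within-subjects (revision) factor, 'group' is
# between-subjects; the Interaction row corresponds to the revision x group effect.
aov = pg.mixed_anova(data=df, dv="score", within="phase", between="group", subject="student")
print(aov.round(3))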

As presented in Table 4-11, students' overall initial and revised scores across the three groups were collected to address Research Question 2. For each of the sub-questions under Research Question 2, the overall quality of all participants' initial and revised decisions was tested. If there were significant differences between the initial and revised decisions, follow-up tests on the three sub-dimensions (i.e., case assessment, prioritization of issues and objectives, and plan of an immediate action) would be conducted.

Table 4-11

Data used to test the two-way interaction effect of revision and group (Research Question 2)

                  Session 1            Session 2            Across Sessions 1 & 2    Session 3
                  Initial    Revised   Initial    Revised   Initial    Revised       Transfer test
EC/NP a           -          -         -          -         NP-1       NP-2          -
EC/EP b           -          -         -          -         EP-1       EP-2          -
EC/LP c           -          -         -          -         LP-1       LP-2          -
Total             -          -         -          -         -          -

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ2-1. Revision effects among groups (EC/NP vs. EC/EP vs. EC/LP)

Hypothesis 2-1. There will be significant differences between the qualities of initial and

revised clinical decision among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP) in case-based learning.

To test Hypothesis 2-1, the initial and revised clinical decision scores of the participants in the three groups were collected. A total of four different scores on the quality of the clinical decision were calculated: scores on the quality of the three sub-dimensions (case assessment, prioritization of issues and objectives, and plan of an immediate action) and the overall average of the three sub-dimension scores.

RQ2-1-1. On the overall quality of the decision (EC/NP vs. EC/EP vs. EC/LP).

Hypothesis 2-1-1. There will be significant differences between the overall quality of the initial


and revised clinical decision among the expert commentary only group (EC/NP), expert

commentary with early peer feedback group (EC/EP), and expert commentary with later peer

feedback group (EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised decisions of the participants

in the EC/NP, EC/EP, and EC/LP are summarized in Table 4-12. As shown in Table 4-12, the

average scores of initial decision of the EC/NP, EC/EP, and EC/LP are 45.30 (SD = 11.02),

51.31 (SD = 12.47), and 49.49 (SD = 10.59), respectively. The average scores of revised

decision of EC/NP, EC/EP, and EC/LP are 50.82 (SD = 12.93), 59.76 (SD = 7.42), and 57.99

(SD = 8.82), respectively.

Table 4-12

Descriptive statistics on the overall quality of the initial and revised clinical decisions among

EC/NP, EC/EP, and EC/LP

Initial Revised

M SD M SD

EC/NPa (n=25) 45.30 11.02 50.82 12.93

EC/EPb (n=9) 51.31 12.47 59.76 7.42

EC/LPc (n=13) 49.49 10.59 57.99 8.82

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

To examine whether the differences between the initial and revised decisions by three

groups were significant, a repeated-measures ANOVA was conducted (see Table 4-13). The

results revealed a significant revision effect (F = 43.541, p < .05) indicating that there were

statistically significant differences between the initial and revised answers. In other words, the

overall quality of the initial answers was significantly improved through the revision in the case-


based online module. The difference between the initial and revised answers was a large effect,

with η2 = .497 (Cohen, 1988).

On the other hand, the interaction effect between revision and group was not significant

(F = 1.004, p > .05) indicating that the differences in the quality of the initial and revised clinical

decisions were similar across the groups. In other words, the overall quality of the revised

decisions made by the three groups was significantly improved, but no significant differences

among groups existed. Thus, hypothesis 2-1, which assumed different qualities of the initial and revised clinical decisions across the three groups, was not supported.

Table 4-13

Summary of repeated-measures ANOVA for the overall quality of the initial and revised clinical

decisions in EC/NP, EC/EP, and EC/LP

Source                     Sum of Squares    df    Mean square     F          p       η2
Within subjects
  Revision a               2213.708          1     2213.708        43.541     .000    .497
  Revision a x GroupA b    102.126           2     51.063          1.004      .375    .044
  Error (Revision a)       2237.049          44    50.842
Between subjects
  Intercept                108560.688        1     108560.688      978.109    .000    .957
  GroupA b                 499.266           2     249.633         2.249      .117    .093
  Error                    4883.577          44    110.990

Note. a Revision indicates the qualities of the initial and revised clinical decision. b GroupA indicates the three different groups: expert commentary only group (EC/NP), expert commentary with early peer feedback group (EC/EP), and expert commentary with later peer feedback group (EC/LP).


RQ2-1-2. On the quality of case assessment (EC/NP vs. EC/EP vs. EC/LP).

Hypothesis 2-1-2. There will be significant differences between the qualities of initial and

revised case assessment among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised assessment in the three

groups are presented in Table 4-14. As shown in Table 4-14, the average scores of initial case

assessment of EC/NP, EC/EP, and EC/LP are 44.46 (SD = 15.79), 50.69 (SD = 14.18), and 45.69

(SD = 11.70), respectively. The average scores of the revised case assessment of EC/NP, EC/EP, and EC/LP are 48.07 (SD = 16.97), 57.58 (SD = 9.57), and 50.62 (SD = 16.07), respectively.

Table 4-14

Descriptive statistics on the quality of the initial and revised case assessment among EC/NP,

EC/EP, and EC/LP

Assessment

Initial Revised

M SD M SD

EC/NPa (n=25) 44.46 15.79 48.07 16.97

EC/EPb (n=9) 50.69 14.18 57.58 9.57

EC/LPc (n=13) 45.69 11.70 50.62 16.07

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.


Since hypothesis 2-1, which assumed significant differences among the three groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of case assessment, one of the three sub-dimensions, was not conducted.

RQ2-1-3. On the quality of prioritization of issues and objectives (EC/NP vs. EC/EP

vs. EC/LP). Hypothesis 2-1-3. There will be significant differences between the qualities of

initial and revised prioritization of issues and objectives among the expert commentary only

group (EC/NP), expert commentary with early peer feedback group (EC/EP), and expert

commentary with later peer feedback group (EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised prioritization in the three

groups are presented in Table 4-15. As shown in Table 4-15, the average scores of initial

prioritization of EC/NP, EC/EP, and EC/LP are 44.99 (SD = 11.63), 48.44 (SD = 11.40), and

48.69 (SD = 11.68), respectively. The average scores of the revised prioritization of EC/NP, EC/EP, and EC/LP are 48.39 (SD = 13.76), 55.08 (SD = 9.54), and 57.85 (SD = 11.93), respectively.

Table 4-15

Descriptive statistics on the quality of the initial and revised prioritization among the EC/NP,

EC/EP, and EC/LP

Prioritization

Initial Revised

M SD M SD

EC/NPa (n=25) 44.99 11.63 48.39 13.76

EC/EPb (n=9) 48.44 11.40 55.08 9.54

EC/LPc (n=13) 48.69 11.68 57.85 11.93

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.


Since hypothesis 2-1, which assumed significant differences among the three groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

RQ2-1-4. On the quality of plan of an immediate action (EC/NP vs. EC/EP vs.

EC/LP). Hypothesis 2-1-4. There will be significant differences between the qualities of initial

and revised plan of an immediate action among the expert commentary only group (EC/NP),

expert commentary with early peer feedback group (EC/EP), and expert commentary with later

peer feedback group (EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised plan in the three groups are

presented in Table 4-16. As shown in Table 4-16, the average scores of initial plan of EC/NP,

EC/EP, and EC/LP are 46.44 (SD = 12.65), 54.81 (SD = 14.39), and 54.10 (SD = 13.83),

respectively. The average scores of the revised plan of EC/NP, EC/EP, and EC/LP are 56.00 (SD = 14.70), 66.61 (SD = 8.64), and 65.52 (SD = 9.96), respectively.

Table 4-16

Descriptive statistics on the quality of the initial and revised plan among EC/NP, EC/EP, and

EC/LP

Plan

Initial Revised

M SD M SD

EC/NPa (n=25) 46.44 12.65 56.00 14.70

EC/EPb (n=9) 54.81 14.39 66.61 8.64

EC/LPc (n=13) 54.10 13.83 65.52 9.96

Note. a EC/NP indicates the group who received the expert commentary videos only. b EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. c EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.


Since hypothesis 2-1, which assumed significant differences among the three groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of the plan of an immediate action, one of the three sub-dimensions, was not conducted.

RQ2-2. Revision effects of peer feedback (EC/NP vs. EC/EP and EC/LP)

Hypothesis 2-2. There will be significant differences in the quality of students’ initial and

revised clinical decisions between groups with peer feedback (EC/EP and EC/LP) and without

peer feedback (EC/NP) in case-based learning.

Research question 2-2 examines the effects of the peer feedback on the quality of

students’ revised clinical decisions. To test the interaction effects between revision and group,

the initial and revised clinical decision responses of the participants in the peer feedback groups

and the no peer feedback group were collected. A total of four different scores on the quality of the clinical decision were calculated: scores on the quality of the three sub-dimensions (case assessment, prioritization of issues and objectives, and plan of an immediate action) and the overall average of the three sub-dimension scores.

RQ2-2-1. On the overall quality of the decision (EC/NP vs. EC/EP and EC/LP).

Hypothesis 2-2-1. The students who received peer feedback (EC/EP and EC/LP) will outperform

the students who did not receive peer feedback (EC/NP) in terms of the overall quality of initial

and revised clinical decision.

Descriptive statistics on the quality of the initial and revised decisions between peer

feedback and no peer feedback groups are summarized in Table 4-17. As described in Table 4-

17, the average scores of initial decision of peer feedback and no peer feedback groups are 50.24

(SD = 11.14) and 45.30 (SD = 11.02), respectively. The average scores of revised decision of


peer feedback groups and no peer feedback group are 58.72 (SD = 8.14) and 50.82 (SD = 12.93),

respectively.

Table 4-17

Descriptive statistics of the overall quality of the initial and revised clinical decisions between

peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP)

Overall

Initial Revised

M SD M SD

Peer feedback groupa (n=22) 50.24 11.14 58.72 8.14

No peer feedback groupb (n=25) 45.30 11.02 50.82 12.93

Note. a Peer feedback group indicates the EC/EP group (n=9), who received the expert commentary videos as well as early peer feedback, and the EC/LP group (n=13), who received the expert commentary videos as well as later peer feedback. b No peer feedback group indicates the EC/NP group, who received the expert commentary videos only.

To examine whether the differences between the initial and revised decisions by the

treatment and control groups were significant, a repeated-measures ANOVA was conducted (see

Table 4-18). The results revealed a significant revision effect (F = 46.142, p < .05) indicating

that there were statistically significant differences between the initial and revised answers

throughout the two sessions. In other words, the overall quality of the initial answers was

significantly improved through the revision in the case-based online module. The difference

between the initial and revised answers was a large effect, with η2 = .506 (Cohen, 1988).

On the other hand, the interaction effect between revision and group was not significant

(F = 2.054, p > .05) indicating that the differences between the qualities of the initial and revised

clinical decisions were similar regardless of group. Thus, hypothesis 2-2, which assumed that the initial and revised scores of the students in the peer feedback groups would be higher than those of the students in the no peer feedback group, was not supported.


Table 4-18

Summary of repeated-measures ANOVA on the overall quality of the initial and revised clinical

decisions in the peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP)

Source                     Sum of Squares    df    Mean square     F           p       η2
Within subjects
  Revision a               2293.816          1     2293.816        46.142      .000    .506
  Revision a x GroupB b    102.110           1     102.110         2.054       .159    .044
  Error (Revision)         2237.065          45    49.713
Between subjects
  Intercept                123030.910        1     123030.910      1129.718    .000    .962
  GroupB b                 482.160           1     482.160         4.427       .041    .090
  Error                    4900.683          45    108.904

Note. a Revision indicates the quality of the initial and revised clinical decision. b GroupB indicates two different groups: peer feedback groups (EC/EP and EC/LP: expert commentary with early or later peer feedback) and no peer feedback group (EC/NP: expert commentary only).

RQ2-2-2. On the quality of case assessment (EC/NP vs. EC/EP and EC/LP).

Hypothesis 2-2-2. The students who received peer feedback (EC/EP and EC/LP) will outperform

the students who did not receive peer feedback (EC/NP) in terms of the quality of initial and

revised case assessment.

Descriptive statistics on the quality of the initial and revised clinical decisions in the two

groups are presented in Table 4-19. As shown in Table 4-19, the average scores of initial

decision of peer feedback groups and no peer feedback group are 47.74 (SD = 12.70) and 44.46

(SD = 15.79), respectively. The average scores of revised decision of peer feedback groups and

no peer feedback group are 53.47 (SD = 13.95) and 48.07 (SD = 16.97), respectively.


Table 4-19

Descriptive statistics on the quality of the initial and revised case assessment between peer

feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP)

Case Assessment

Initial Revised

M SD M SD

Peer feedback groupa (n=22) 47.74 12.70 53.47 13.95

No peer feedback groupb (n=25) 44.46 15.79 48.07 16.97

Note. aPeer feedback group indicates EC/EP group (n=9) who received the expert commentary

videos as well as early peer feedback and EC/LP group (n=13) who received the expert

commentary videos as well as later peer feedback. bNo peer feedback group indicates EC/NP

group who received the expert commentary videos only.

Since hypothesis 2-2, which assumed significant differences between the two groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of case assessment, one of the three sub-dimensions, was not conducted.

RQ2-2-3. On the quality of prioritization of issues and objectives (EC/NP vs. EC/EP

and EC/LP). Hypothesis 2-2-3. The students who received peer feedback (EC/EP and EC/LP)

will outperform the students who did not receive peer feedback (EC/NP) in terms of the quality

of initial and revised prioritization of issues and objectives.

Descriptive statistics on the quality of the initial and revised clinical decisions in the two

groups are presented in Table 4-20. As shown in Table 4-20, the average scores of initial

decision of peer feedback groups and no peer feedback group are 48.59 (SD = 11.29) and 44.99

(SD = 11.63), respectively. The average scores of revised decision of peer feedback groups and

no peer feedback group are 56.72 (SD = 10.86) and 48.39 (SD = 13.76), respectively.


Table 4-20

Descriptive statistics on the quality of the initial and revised prioritization between peer

feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP)

Prioritization

Initial Revised

M SD M SD

Peer feedback groupa (n=22) 48.59 11.29 56.72 10.86

No peer feedback groupb (n=25) 44.99 11.63 48.39 13.76

Note. aPeer feedback group indicates EC/EP group (n=9) who received the expert commentary

videos as well as early peer feedback and EC/LP group (n=13) who received the expert

commentary videos as well as later peer feedback. bNo peer feedback group indicates EC/NP

group who received the expert commentary videos only.

Since hypothesis 2-2, which assumed significant differences between the two groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

RQ2-2-4. On the quality of plan of an immediate action (EC/NP vs. EC/EP and

EC/LP). Hypothesis 2-2-4. The students who received peer feedback (EC/EP and EC/LP) will

outperform the students who did not receive peer feedback (EC/NP) in terms of the quality of

initial and revised plan of an immediate action.

Descriptive statistics on the quality of the initial and revised clinical decisions in the two

groups are presented in Table 4-21. As shown in Table 4-21, the average scores of initial

decision of peer feedback groups and no peer feedback group are 54.39 (SD = 13.72) and 46.44

(SD = 12.65), respectively. The average scores of revised decision of peer feedback groups and

no peer feedback group are 65.97 (SD = 9.24) and 56.00 (SD = 14.70), respectively.


Table 4-21

Descriptive statistics of the quality of the initial and revised plan between peer feedback (EC/EP

and EC/LP) and no peer feedback groups (EC/NP)

Plan

Initial Revised

M SD M SD

Peer feedback groupa (n=22) 54.39 13.72 65.97 9.24

No peer feedback groupb (n=25) 46.44 12.65 56.00 14.70

Note. aPeer feedback group indicates EC/EP group (n=9) who received the expert commentary

videos as well as early peer feedback and EC/LP group (n=13) who received the expert

commentary videos as well as later peer feedback. bNo peer feedback group indicates EC/NP

group who received the expert commentary videos only.

Since hypothesis 2-2, which assumed significant differences between the two groups in the quality of the initial and revised clinical decisions, was not supported, the follow-up analysis comparing the differences in the quality of the plan of an immediate action, one of the three sub-dimensions, was not conducted.

RQ2-3. Revision effects of the timing of the peer feedback (EC/EP vs. EC/LP)

Hypothesis 2-3. There will be significant differences in the quality of students’ initial and

revised clinical decisions between the early peer feedback group (EC/EP) and later peer feedback

group (EC/LP) in case-based learning.

Research question 2-3 examines the effects of the timing of the peer feedback on the

quality of students’ revised clinical decisions. To test the interaction effects between revision

and group, the initial and revised clinical decision responses of the participants in the early and

later peer feedback groups were collected. A total of four different scores on the quality of the clinical decision were calculated: scores on the quality of the three sub-dimensions (case assessment, prioritization of issues and objectives, and plan of an immediate action) and the overall average of the three sub-dimension scores.


RQ2-3-1. On the overall quality of the decision (EC/EP vs. EC/LP). Hypothesis 2-3-1.

There will be significant differences in the overall quality of the initial and revised clinical

decisions between the early peer feedback group (EC/EP) and later peer feedback group (EC/LP)

in case-based learning.

Descriptive statistics on the quality of the initial and revised clinical decisions in the two

groups are presented in Table 4-22. As shown in Table 4-22, the average scores of initial

decision of EC/EP and EC/LP are 51.31 (SD = 12.47) and 49.49 (SD = 10.59), respectively. The

average scores of revised decision of EC/EP and EC/LP are 59.76 (SD = 7.42) and 57.99 (SD =

8.82), respectively.

Table 4-22

Descriptive statistics on the overall quality of the initial and revised clinical decisions between

EC/EP and EC/LP

Overall

Initial Revised

M SD M SD

EC/EPa (n=9) 51.31 12.47 59.76 7.42

EC/LPb (n=13) 49.49 10.59 57.99 8.82

Note. aEC/EP indicates the group who received the expert commentary videos as well as early

peer feedback. bEC/LP indicates the group who received the expert commentary videos as well

as later peer feedback.

To examine whether the differences between the initial and revised decisions between

two groups were significant, a repeated-measures ANOVA was conducted (see Table 4-23). The

results revealed a significant revision effect (F = 20.404, p < .05) indicating that there were

statistically significant differences between the initial and revised answers. In other words, the

overall quality of the initial answers was significantly improved through the revision in the case-


based online module. The difference between the initial and revised answers was a large effect,

with η2 = .505 (Cohen, 1988).

On the other hand, the interaction effect between revision and group was not significant (F = .000, p > .05), indicating that the improvement from the initial to the revised clinical decisions was similar across the early and later peer feedback groups. Thus, hypothesis 2-3, which assumed different qualities of the initial and revised clinical decisions between the two groups, was not supported.

Table 4-23

Summary of a repeated-measures ANOVA for the overall quality of the initial and revised

clinical decisions in the EC/EP and EC/LP groups

Source                     Sum of Squares    df    Mean square     F          p       η2
Within subjects
  Revision a               1526.926          1     1526.926        20.404     .000    .505
  Revision a x GroupC b    .016              1     .016            .000       .988    .000
  Error (Revision)         1496.708          20    74.835
Between subjects
  Intercept                63511.085         1     63511.085       790.001    .000    .975
  GroupC b                 17.106            1     17.106          .213       .650    .011
  Error                    1607.873          20    80.394

Note. a Revision indicates the qualities of the initial and revised clinical decision. b GroupC indicates the two different groups: expert commentary with early peer feedback group (EC/EP) and expert commentary with later peer feedback group (EC/LP).

RQ2-3-2. On the quality of case assessment (EC/EP vs. EC/LP). Hypothesis 2-3-2.

There will be significant differences in the quality of the initial and revised case assessment

between the early peer feedback group (EC/EP) and later peer feedback group (EC/LP) in case-

based learning.

Descriptive statistics on the quality of the initial and revised case assessment in the two groups are presented in Table 4-24. As shown in Table 4-24, the average initial case assessment scores of the early and later peer feedback groups were 50.69 (SD = 14.18) and 45.69 (SD = 11.70), respectively, and the average revised case assessment scores were 57.58 (SD = 9.57) and 50.62 (SD = 16.07), respectively.

Table 4-24

Descriptive statistics of the quality of the initial and revised case assessment between EC/EP and EC/LP

                  Initial              Revised
Group             M        SD          M        SD
EC/EP (n=9)       50.69    14.18       57.58    9.57
EC/LP (n=13)      45.69    11.70       50.62    16.07

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

Since hypothesis 2-3, which assumed significant differences in the initial and revised clinical decisions between the two groups, was not supported, the follow-up analysis comparing the quality of case assessment, one of the three sub-dimensions, was not conducted.

RQ2-3-3. On the quality of prioritization of issues and objectives (EC/EP vs. EC/LP).

Hypothesis 2-3-3. There will be significant differences in the quality of the initial and revised

prioritization of issues and objectives between the early peer feedback group (EC/EP) and later

peer feedback group (EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised prioritization of issues and objectives in the two groups are presented in Table 4-25. As shown in Table 4-25, the average initial prioritization scores of the early and later peer feedback groups were 48.44 (SD = 11.40) and 48.69 (SD = 11.68), respectively, and the average revised prioritization scores were 55.08 (SD = 9.54) and 57.85 (SD = 11.93), respectively.

Table 4-25

Descriptive statistics on the quality of the initial and revised prioritization between EC/EP and EC/LP

                  Initial              Revised
Group             M        SD          M        SD
EC/EP (n=9)       48.44    11.40       55.08    9.54
EC/LP (n=13)      48.69    11.68       57.85    11.93

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

Since hypothesis 2-3, which assumed significant differences in the initial and revised clinical decisions between the two groups, was not supported, the follow-up analysis comparing the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

RQ2-3-4. On the quality of plan of an immediate action (EC/EP vs. EC/LP).

Hypothesis 2-3-4. There will be significant differences in the quality of the initial and revised

plan of an immediate action between the early peer feedback group (EC/EP) and later peer

feedback group (EC/LP) in case-based learning.

Descriptive statistics on the quality of the initial and revised plan of an immediate action in the two groups are presented in Table 4-26. As shown in Table 4-26, the average initial plan scores of EC/EP and EC/LP were 54.81 (SD = 14.39) and 54.10 (SD = 13.83), respectively, and the average revised plan scores were 66.61 (SD = 8.64) and 65.52 (SD = 9.96), respectively.

Table 4-26

Descriptive statistics of the quality of the initial and revised plan between the expert commentary with early peer feedback group (EC/EP) and the expert commentary with later peer feedback group (EC/LP)

                  Initial              Revised
Group             M        SD          M        SD
EC/EP (n=9)       54.81    14.39       66.61    8.64
EC/LP (n=13)      54.10    13.83       65.52    9.96

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

Since hypothesis 2-3, which assumed significant differences in the initial and revised clinical decisions between the two groups, was not supported, the follow-up analysis comparing the quality of plan of an immediate action, one of the three sub-dimensions, was not conducted.

RQ3. Revision effects by groups across sessions

Research Question 3. Are there significant differences in the quality of the initial and

revised clinical decision among groups across two sessions?

o Research Question 3-1. Are there significant differences in the quality of students’

initial and revised clinical decision among EC/NP (expert commentary only),

EC/EP (expert commentary and early peer feedback), and EC/LP (expert

commentary and later peer feedback) across two sessions?

o Research Question 3-2. Are there significant differences in the quality of students’

initial and revised clinical decision between peer feedback groups (EC/EP and

EC/LP) and no peer feedback group (EC/NP) across two sessions?

o Research Question 3-3. Are there significant differences in the quality of students’

initial and revised clinical decision between the early peer feedback group (EC/EP)

and later peer feedback group (EC/LP) across two sessions?

The third research question examines whether there is any significant difference in the quality of the initial and revised clinical decisions among groups across the two sessions. To address this question, the three-way interaction effect among revision, group, and session was tested for three different group comparisons across the two sessions.

Table 4-27

Data used to test the three-way interaction effect (Revision x Group x Session) (Research Question 3)

         Session 1            Session 2            Across Sessions 1 & 2    Session 3
Group    Initial   Revised    Initial   Revised    Initial     Revised      Transfer test
EC/NP    NP1-1     NP1-2      NP2-1     NP2-2      -           -            -
EC/EP    EP1-1     EP1-2      EP2-1     EP2-2      -           -            -
EC/LP    LP1-1     LP1-2      LP2-1     LP2-2      -           -            -
Total    -         -          -         -          -           -            -

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-1. Revision effects among groups across sessions (EC/NP vs. EC/EP vs. EC/LP)

Hypothesis 3-1. There will be significant differences in the quality of students’ initial and

revised clinical decision among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP) across two sessions.

In order to test the hypothesis, the initial and revised clinical decision responses of all

participants were collected. A total of four different kinds of scores on the quality of the clinical

decision were calculated: scores on the quality of the three sub-dimensions (case assessment,

prioritization of issues and objectives, and plan of an immediate action) and overall averages of

the three sub-dimension scores.

RQ3-1-1. On the overall quality of the decision across sessions (EC/NP vs. EC/EP vs.

EC/LP). Hypothesis 3-1-1. There will be significant differences in the overall quality of the

initial and revised clinical decision among the expert commentary only group (EC/NP), expert

commentary with early peer feedback group (EC/EP), and expert commentary with later peer

feedback group (EC/LP) across two sessions.

Descriptive statistics on the quality of the initial and revised clinical decisions in the

EC/NP, EC/EP, and EC/LP across the two sessions are presented in Table 4-28.

Table 4-28

Descriptive statistics of the overall quality of the initial and revised clinical decisions among the EC/NP, EC/EP, and EC/LP across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/NP (n=25)       39.79   15.18   45.68   16.33     50.81   9.52     55.96   11.64
EC/EP (n=9)        47.50   13.95   61.04   11.14     55.13   15.43    58.48   13.03
EC/LP (n=13)       38.72   15.53   49.58   14.07     60.27   12.36    66.41   7.78

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

To examine whether the differences between the initial and revised decisions among the three groups were significant, a repeated-measures ANOVA with two within-subjects variables (the initial and revised decision-making responses, i.e., revision, and the two sessions) and one between-subjects variable (group) was conducted. The revision x group, revision x session, and revision x session x group interaction effects were examined to test the hypothesis.

The results showed that neither the revision x group interaction (F = 1.004, p > .05) nor the three-way interaction among revision, session, and group (F = 1.781, p > .05) was statistically significant. This result indicates that there was no significant difference among the three groups in the overall quality of the initial and revised clinical decisions across the two sessions.

Table 4-29

Summary of a repeated-measures ANOVA for the overall quality of the initial and revised clinical decisions among the EC/NP, EC/EP, and EC/LP

Source                            Sum of Squares    df    Mean square    F         p       η2
Within subjects
  Revision                        4427.415          1     4427.415       43.541    .000    .497
  Revision x GroupA               204.253           2     102.126        1.004     .375    .044
  Error (Revision)                4474.098          44    101.684
  Session                         2298.826          1     2298.826       27.574    .000    .385
  Session x GroupA                755.114           2     377.557        4.529     .016    .171
  Error (Session)                 3668.295          44    83.370
  Revision x Session              536.559           1     536.559        6.257     .016    .124
  Revision x Session x GroupA     305.453           2     152.727        1.781     .180    .075
  Error (Revision x Session)      3773.386          44    85.759
Between subjects
  Intercept                       217121.376        1     217121.376     978.109   .000    .957
  GroupA                          998.532           2     499.266        2.249     .117    .093
  Error                           9767.155          44    221.981

Note. Revision indicates the initial and revised decision-making scores. GroupA indicates the three different groups: the expert commentary only group (EC/NP), the expert commentary with early peer feedback group (EC/EP), and the expert commentary with later peer feedback group (EC/LP). Session indicates the two sessions, each comprising two Decision Points: the first session comprised DP2 and DP3, and the second session comprised DP4 and DP5.

RQ3-1-2. On the quality of case assessment across sessions (EC/NP vs. EC/EP vs.

EC/LP). Hypothesis 3-1-2. There will be significant differences in the quality of the initial and

revised case assessment among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP) across two sessions.

Descriptive statistics on the quality of the initial and revised case assessment in the EC/NP, EC/EP, and EC/LP groups are presented in Table 4-30. Since hypothesis 3-1, which assumed significant differences in the initial and revised clinical decisions among the three groups, was not supported, the follow-up analysis comparing the quality of case assessment, one of the three sub-dimensions, was not conducted.

Table 4-30

Descriptive statistics of the quality of the initial and revised case assessment among EC/NP, EC/EP, and EC/LP across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/NP (n=25)       31.82   21.52   36.70   22.26     57.10   15.28    59.44   16.56
EC/EP (n=9)        40.11   17.59   52.83   19.45     61.28   15.89    62.33   14.86
EC/LP (n=13)       28.04   18.17   37.08   26.44     63.35   12.49    64.15   12.75

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-1-3. On the quality of prioritization of issues and objectives across sessions

(EC/NP vs. EC/EP vs. EC/LP). Hypothesis 3-1-3. There will be significant differences in the

quality of the initial and revised prioritization among the expert commentary only group

(EC/NP), expert commentary with early peer feedback group (EC/EP), and expert commentary

with later peer feedback group (EC/LP) across two sessions.

Descriptive statistics on the quality of the initial and revised prioritization of issues and objectives in the EC/NP, EC/EP, and EC/LP groups are presented in Table 4-31. Since hypothesis 3-1, which assumed significant differences in the initial and revised clinical decisions among the three groups, was not supported, the follow-up analysis comparing the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

Table 4-31

Descriptive statistics of the quality of the initial and revised prioritization of issues and objectives among the EC/NP, EC/EP, and EC/LP across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/NP (n=25)       32.96   16.68   37.12   19.45     57.02   13.88    59.66   13.08
EC/EP (n=9)        37.50   13.71   53.44   9.62      59.39   16.46    56.72   14.39
EC/LP (n=13)       36.00   19.66   48.39   18.68     61.39   11.91    67.31   11.70

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-1-4. On the quality of plan of an immediate action across sessions (EC/NP vs.

EC/EP vs. EC/LP). Hypothesis 3-1-4. There will be significant differences in the quality of the

initial and revised plan among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP) across two sessions.

Descriptive statistics on the quality of the initial and revised plan of an immediate action in the EC/NP, EC/EP, and EC/LP groups are presented in Table 4-32. Since hypothesis 3-1, which assumed significant differences in the initial and revised clinical decisions among the three groups, was not supported, the follow-up analysis comparing the quality of plan of an immediate action, one of the three sub-dimensions, was not conducted.

Table 4-32

Descriptive statistics of the quality of the initial and revised plan of an immediate action among EC/NP, EC/EP, and EC/LP across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/NP (n=25)       54.58   19.81   63.22   17.39     38.30   15.35    48.78   17.46
EC/EP (n=9)        64.89   21.34   76.83   12.60     44.72   20.53    56.39   17.46
EC/LP (n=13)       52.12   18.18   63.27   16.48     56.08   21.33    67.77   17.45

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-2. Revision effects of peer feedback across sessions (EC/NP vs. EC/EP and EC/LP)

Hypothesis 3-2. There will be significant differences in the quality of students’ initial and

revised clinical decisions between peer feedback groups (EC/EP and EC/LP) and no peer

feedback group (EC/NP) across sessions.

In order to test the hypothesis, the initial and revised clinical decision responses of all

participants were collected. A total of four different kinds of scores on the quality of the clinical

decision were calculated: scores on the quality of the three sub-dimensions (case assessment,

prioritization of issues and objectives, and plan of an immediate action) and overall averages of

the three sub-dimension scores.

RQ3-2-1. On the overall quality of the decision across sessions (EC/NP vs. EC/EP

and EC/LP). Hypothesis 3-2-1. The students who received peer feedback (EC/EP and EC/LP)

will outperform the students who did not receive peer feedback (EC/NP) in terms of the overall

quality of initial and revised clinical decision across two sessions.

Descriptive statistics on the quality of the initial and revised decisions between peer

feedback groups (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across sessions are

summarized in Table 4-33.

Table 4-33

Descriptive statistics of the overall quality of the initial and revised clinical decisions between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across two sessions

                                Session 1                         Session 2
                                Initial         Revised           Initial          Revised
Group                           M       SD      M       SD        M       SD       M       SD
Peer feedback group (n=22)      42.31   15.21   54.27   13.92     58.17   13.59    63.17   10.73
No peer feedback group (n=25)   39.79   15.18   45.68   16.33     50.81   9.52     55.96   11.64

Note. Peer feedback group indicates the EC/EP group (n=9), who received the expert commentary videos as well as early peer feedback, and the EC/LP group (n=13), who received the expert commentary videos as well as later peer feedback. No peer feedback group indicates the EC/NP group, who received the expert commentary videos only.

To examine whether the differences between the initial and revised decisions in the peer feedback and no peer feedback groups were significant, a repeated-measures ANOVA with two within-subjects variables (the initial and revised decision-making responses, i.e., revision, and the two sessions) and one between-subjects variable (group) was conducted (see Table 4-34). The revision x group, revision x session, and revision x session x group interaction effects were examined to test the hypothesis.

The results showed that neither the revision x group interaction (F = 2.054, p > .05) nor the three-way interaction among revision, session, and group (F = 2.639, p > .05) was statistically significant. This result indicates that there was no significant difference between the peer feedback and no peer feedback groups in the overall quality of the initial and revised clinical decisions across the two sessions.

Table 4-34

Summary of a repeated-measures ANOVA for the overall quality of the initial and revised clinical decisions in the peer feedback and no peer feedback groups

Source                            Sum of Squares    df    Mean square    F          p       η2
Within subjects
  Revision                        4587.631          1     4587.631       46.142     .000    .506
  Revision x GroupB               204.220           1     204.220        2.054      .159    .044
  Error (Revision)                4474.131          45    99.425
  Session                         3102.966          1     3102.966       31.692     .000    .413
  Session x GroupB                17.487            1     17.487         .179       .675    .004
  Error (Session)                 4405.922          45    97.909
  Revision x Session              346.418           1     346.418        4.046      .050    .082
  Revision x Session x GroupB     225.971           1     225.971        2.639      .111    .055
  Error (Revision x Session)      3852.868          45    85.619
Between subjects
  Intercept                       246061.820        1     246061.820     1129.718   .000    .962
  GroupB                          964.320           1     964.320        4.427      .041    .090
  Error                           9801.366          45    217.808

Note. Revision indicates the initial and revised decision-making scores. GroupB indicates the two different groups: the peer feedback groups (EC/EP and EC/LP: expert commentary with early or later peer feedback) and the no peer feedback group (EC/NP: expert commentary only). Session indicates the two sessions, each comprising two Decision Points: the first session comprised DP2 and DP3, and the second session comprised DP4 and DP5.

RQ3-2-2. On the quality of case assessment across sessions (EC/NP vs. EC/EP and

EC/LP). Hypothesis 3-2-2. The students who received peer feedback (EC/EP and EC/LP) will

outperform the students who did not receive peer feedback (EC/NP) in terms of the quality of

initial and revised case assessment across two sessions.

Descriptive statistics on the quality of the initial and revised case assessment between the peer feedback groups (EC/EP and EC/LP) and the no peer feedback group (EC/NP) across sessions are summarized in Table 4-35. Since hypothesis 3-2, which assumed significant differences in the initial and revised clinical decisions between the peer feedback and no peer feedback groups, was not supported, the follow-up analysis comparing the quality of case assessment, one of the three sub-dimensions, was not conducted.

Table 4-35

Descriptive statistics of the quality of the initial and revised case assessment between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across two sessions

                                Session 1                         Session 2
                                Initial         Revised           Initial          Revised
Group                           M       SD      M       SD        M       SD       M       SD
Peer feedback group (n=22)      32.98   18.53   43.52   24.63     62.50   13.66    63.41   13.34
No peer feedback group (n=25)   31.82   21.52   36.70   22.26     57.10   15.28    59.44   16.56

Note. Peer feedback group indicates the EC/EP group (n=9), who received the expert commentary videos as well as early peer feedback, and the EC/LP group (n=13), who received the expert commentary videos as well as later peer feedback. No peer feedback group indicates the EC/NP group, who received the expert commentary videos only.

RQ3-2-3. On the quality of prioritization of issues and objectives across sessions

(EC/NP vs. EC/EP and EC/LP). Hypothesis 3-2-3. The students who received peer feedback

(EC/EP and EC/LP) will outperform the students who did not receive peer feedback (EC/NP) in

terms of the quality of initial and revised prioritization of issues and objectives across two

sessions.

Descriptive statistics on the quality of the initial and revised prioritization of issues and objectives between the peer feedback groups (EC/EP and EC/LP) and the no peer feedback group (EC/NP) across sessions are summarized in Table 4-36. Since hypothesis 3-2, which assumed significant differences in the initial and revised clinical decisions between the peer feedback and no peer feedback groups, was not supported, the follow-up analysis comparing the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

Table 4-36

Descriptive statistics on the quality of the initial and revised prioritization of issues and objectives between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across two sessions

                                Session 1                         Session 2
                                Initial         Revised           Initial          Revised
Group                           M       SD      M       SD        M       SD       M       SD
Peer feedback group (n=22)      36.61   17.12   50.46   15.53     60.57   13.62    62.98   13.62
No peer feedback group (n=25)   32.96   16.68   37.12   19.45     57.02   13.88    59.66   13.08

Note. Peer feedback group indicates the EC/EP group (n=9), who received the expert commentary videos as well as early peer feedback, and the EC/LP group (n=13), who received the expert commentary videos as well as later peer feedback. No peer feedback group indicates the EC/NP group, who received the expert commentary videos only.

RQ3-2-4. On the quality of plan of an immediate action across sessions (EC/NP vs.

EC/EP and EC/LP). Hypothesis 3-2-4. The students who received peer feedback (EC/EP and

EC/LP) will outperform the students who did not receive peer feedback (EC/NP) in terms of the

quality of initial and revised plan of an immediate action across two sessions.

Descriptive statistics on the quality of the initial and revised plan of an immediate action between the peer feedback groups (EC/EP and EC/LP) and the no peer feedback group (EC/NP) across sessions are summarized in Table 4-37. Since hypothesis 3-2, which assumed significant differences in the initial and revised clinical decisions between the peer feedback and no peer feedback groups, was not supported, the follow-up analysis comparing the quality of plan of an immediate action, one of the three sub-dimensions, was not conducted.

Table 4-37

Descriptive statistics on the quality of the initial and revised plan of an immediate action between peer feedback (EC/EP and EC/LP) and no peer feedback groups (EC/NP) across two sessions

                                Session 1                         Session 2
                                Initial         Revised           Initial          Revised
Group                           M       SD      M       SD        M       SD       M       SD
Peer feedback group (n=22)      57.34   20.09   68.82   16.19     51.43   21.29    63.11   17.97
No peer feedback group (n=25)   54.58   19.81   63.22   17.39     38.30   15.35    48.78   17.46

Note. Peer feedback group indicates the EC/EP group (n=9), who received the expert commentary videos as well as early peer feedback, and the EC/LP group (n=13), who received the expert commentary videos as well as later peer feedback. No peer feedback group indicates the EC/NP group, who received the expert commentary videos only.

RQ3-3. Revision effects of timing of the peer feedback across sessions (EC/EP vs. EC/LP)

Hypothesis 3-3. There will be significant differences in the quality of students’ initial and

revised clinical decisions between groups with early peer feedback (EC/EP) and with later peer

feedback (EC/LP) across sessions.

In order to test the hypothesis, the initial and revised clinical decision responses of all

participants in the EC/EP and EC/LP were collected. A total of four different kinds of scores on

the quality of the clinical decision were calculated: scores on the quality of the three sub-

dimensions (case assessment, prioritization of issues and objectives, and plan of an immediate

action) and overall averages of the three sub-dimension scores.

RQ3-3-1. On the overall quality of the decision across sessions (EC/EP vs. EC/LP).

Hypothesis 3-3-1. There will be significant differences in the overall quality of the initial and

revised clinical decisions between the early peer feedback group (EC/EP) and later peer feedback

group (EC/LP) in case-based learning across sessions.

Descriptive statistics on the quality of the initial and revised clinical decisions between

EC/EP and EC/LP across sessions are summarized in Table 4-38.

Table 4-38

Descriptive statistics on the overall quality of the initial and revised clinical decisions between EC/EP and EC/LP across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/EP (n=9)        47.50   13.95   61.04   11.14     55.13   15.43    58.48   13.03
EC/LP (n=13)       38.72   15.53   49.58   14.07     60.27   12.36    66.41   7.78

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

To examine whether the differences between the initial and revised decisions in EC/EP and EC/LP were significant across sessions, a repeated-measures ANOVA with two within-subjects variables (the initial and revised decision-making responses, i.e., revision, and the two sessions) and one between-subjects variable (group) was conducted (see Table 4-39). The revision x group, revision x session, and revision x session x group interaction effects were examined to test the hypothesis.

The results showed that neither the revision x group interaction (F = .000, p > .05) nor the three-way interaction among revision, session, and group (F = .674, p > .05) was statistically significant. This result indicates that there was no significant difference in the overall quality of the initial and revised clinical decisions between EC/EP and EC/LP across the two sessions.

Table 4-39

Summary of a repeated-measures ANOVA for the overall quality of the initial and revised clinical decisions in EC/EP and EC/LP

Source                            Sum of Squares    df    Mean square    F         p       η2
Within subjects
  Revision                        3053.851          1     3053.851       20.404    .000    .505
  Revision x GroupC               .033              1     .033           .000      .988    .000
  Error (Revision)                2993.417          20    149.671
  Session                         1255.528          1     1255.528       11.080    .003    .357
  Session x GroupC                737.627           1     737.627        6.510     .019    .246
  Error (Session)                 2266.288          20    113.314
  Revision x Session              590.593           1     590.593        5.006     .037    .200
  Revision x Session x GroupC     79.482            1     79.482         .674      .421    .033
  Error (Revision x Session)      2359.356          20    117.968
Between subjects
  Intercept                       127022.171        1     127022.171     790.001   .000    .975
  GroupC                          34.211            1     34.211         .213      .650    .011
  Error                           3215.745          20    160.787

Note. Revision indicates the initial and revised decision-making scores. GroupC indicates the two different groups: the expert commentary with early peer feedback group (EC/EP) and the expert commentary with later peer feedback group (EC/LP). Session indicates the two sessions, each comprising two Decision Points: the first session comprised DP2 and DP3, and the second session comprised DP4 and DP5.

RQ3-3-2. On the quality of case assessment across sessions (EC/EP vs. EC/LP).

Hypothesis 3-3-2. There will be significant differences in the quality of the initial and revised

case assessment between the early peer feedback group (EC/EP) and later peer feedback group

(EC/LP) across sessions.

Descriptive statistics on the quality of the initial and revised case assessment between the early peer feedback group (EC/EP) and the later peer feedback group (EC/LP) across sessions are summarized in Table 4-40. Since hypothesis 3-3, which assumed significant differences in the initial and revised clinical decisions between the early and later peer feedback groups, was not supported, the follow-up analysis comparing the quality of case assessment, one of the three sub-dimensions, was not conducted.

Table 4-40

Descriptive statistics on the quality of the initial and revised case assessment between early peer feedback group and later peer feedback group across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/EP (n=9)        40.11   17.59   52.83   19.45     61.28   15.89    62.33   14.86
EC/LP (n=13)       28.04   18.17   37.08   26.44     63.35   12.49    64.15   12.75

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-3-3. On the quality of prioritization of issues and objectives across sessions

(EC/EP vs. EC/LP). Hypothesis 3-3-3. There will be significant differences in the quality of the

initial and revised prioritization of issues and objectives between the early peer feedback group

(EC/EP) and later peer feedback group (EC/LP) across sessions.

Descriptive statistics on the quality of the initial and revised prioritization of issues and objectives between the early peer feedback group (EC/EP) and the later peer feedback group (EC/LP) across sessions are summarized in Table 4-41. Since hypothesis 3-3, which assumed significant differences in the initial and revised clinical decisions between the early and later peer feedback groups, was not supported, the follow-up analysis comparing the quality of prioritization of issues and objectives, one of the three sub-dimensions, was not conducted.

Table 4-41

Descriptive statistics of the quality of the initial and revised prioritization of issues and objectives between early peer feedback group and later peer feedback group across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/EP (n=9)        37.50   13.71   53.44   9.62      59.39   16.46    56.72   14.39
EC/LP (n=13)       36.00   19.66   48.39   18.68     61.39   11.91    67.31   11.70

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ3-3-4. On the quality of plan of an immediate action across sessions (EC/EP vs.

EC/LP). Hypothesis 3-3-4. There will be significant differences in the quality of the initial and

revised plan of an immediate action between the early peer feedback group (EC/EP) and later

peer feedback group (EC/LP) in case-based learning across sessions.

Descriptive statistics on the quality of the initial and revised plan of an immediate action between the early peer feedback group (EC/EP) and the later peer feedback group (EC/LP) across sessions are summarized in Table 4-42. Since hypothesis 3-3, which assumed significant differences in the initial and revised clinical decisions between the early and later peer feedback groups, was not supported, the follow-up analysis comparing the quality of plan of an immediate action, one of the three sub-dimensions, was not conducted.

Table 4-42

Descriptive statistics on the quality of the initial and revised plan of an immediate action between early peer feedback group and later peer feedback group across two sessions

                   Session 1                         Session 2
                   Initial         Revised           Initial          Revised
Group              M       SD      M       SD        M       SD       M       SD
EC/EP (n=9)        64.89   21.34   76.83   12.60     44.72   20.53    56.39   17.46
EC/LP (n=13)       52.12   18.18   63.27   16.48     56.08   21.33    67.77   17.45

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ4. Transfer effects

Research Question 4. Does the participation in the scaffolded revision activities affect

students’ transferred clinical decision-making skills?

o Research Question 4-1. Are there significant differences in the scores on a

transferred clinical decision test among EC/NP (expert commentary only), EC/EP

(expert commentary and early peer feedback), and EC/LP (expert commentary

and later peer feedback)?

o Research Question 4-2. Are there significant differences in the scores on a

transferred clinical decision test between groups with peer feedback (EC/EP and

EC/LP) and without peer feedback (EC/NP)?

o Research Question 4-3. Are there significant differences in the scores on a

transferred clinical decision test between groups with early peer feedback

(EC/EP) and with later peer feedback (EC/LP)?

The fourth research question examines whether the scaffolded revision activities (expert commentary and/or peer feedback) affect students' transfer of clinical decision-making skills. To test the transfer of students' clinical decision-making skills, six multiple-choice clinical decision-making questions were developed and administered upon the completion of the case-based online learning module. The transfer test scores of all participants from the three groups were collected.

Table 4-43

Data used to test the transfer effects of the scaffolded revision activities (Research Question 4)

         Session 1            Session 2            Across Sessions 1 & 2    Session 3
Group    Initial   Revised    Initial   Revised    Initial     Revised      Transfer test
EC/NP    -         -          -         -          -           -            EC-3
EC/EP    -         -          -         -          -           -            EP-3
EC/LP    -         -          -         -          -           -            LP-3
Total    -         -          -         -          -           -            -

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

RQ4-1. Transferred effect by groups (EC/NP vs. EC/EP vs. EC/LP)

Hypothesis 4-1. There will be significant differences in the scores on a transferred

clinical decision test among the expert commentary only group (EC/NP), expert commentary

with early peer feedback group (EC/EP), and expert commentary with later peer feedback group

(EC/LP).

In order to test hypothesis 4-1, all participants' scores on the transfer clinical decision test were collected. Descriptive statistics on the transfer test scores for the EC/NP, EC/EP, and EC/LP groups are summarized in Table 4-44. The maximum possible score on the transfer test was 6.

Table 4-44

Descriptive statistics on the scores on the transfer test among EC/NP, EC/EP, and EC/LP

                 Transfer Test
Group      N     M       SD
EC/NP      24    5.29    0.62
EC/EP      9     5.56    0.53
EC/LP      11    4.91    1.04
Total      44    5.25    0.75

Note. EC/NP indicates the group who received the expert commentary videos only. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

To examine whether there was a significant difference among the three groups, a one-way ANOVA was conducted. The results showed no statistically significant difference in transfer test performance among EC/NP, EC/EP, and EC/LP (F = 2.005, p > .05).

Table 4-45

Summary of a one-way ANOVA for the scores on the transfer test among EC/NP, EC/EP, and EC/LP

Source             Sum of Squares    df    Mean square    F           p       η2
Corrected Model    2.160             2     1.080          2.005       .148    .089
Intercept          1018.772          1     1018.772       1890.916    .000    .979
GroupA             2.160             2     1.080          2.005       .148    .089
Error              22.090            41    .539
Total              1237.000          44
Corrected Total    24.250            43

Note. R Squared = .089 (Adjusted R Squared = .045). GroupA indicates the three different groups: the expert commentary only group (EC/NP), the expert commentary with early peer feedback group (EC/EP), and the expert commentary with later peer feedback group (EC/LP).

RQ4-2. Transferred effect of the peer feedback (EC/NP vs. EC/EP and EC/LP)

Hypothesis 4-2. There will be significant differences in the scores on a transferred

clinical decision test between peer feedback groups (EC/EP and EC/LP) and no peer feedback

group (EC/NP).

Descriptive statistics on the scores on the transfer test from the participants in the peer

feedback groups (EC/EP and EC/LP) and no peer feedback group (EC/NP) are summarized in

Table 4-46. The maximum possible score on the transfer test was 6.

Table 4-46

Descriptive statistics of the scores on the transfer test between the peer feedback groups (EC/EP and EC/LP) and no peer feedback group (EC/NP)

                                                              Transfer Test
Group                                                   N     M       SD
Expert commentary only (EC/NP)                          24    5.29    0.62
Expert commentary with peer feedback (EC/EP & EC/LP)    20    5.20    0.89
Total                                                   44    5.25    0.75

To examine whether there was a significant difference between the two groups, a one-way ANOVA was conducted. The results showed no statistically significant difference in transfer test performance between the groups with peer feedback (EC/EP and EC/LP) and without peer feedback (EC/NP) (F = .159, p > .05).

Table 4-47

Summary of a one-way ANOVA for the scores on the transfer test between groups with peer feedback (EC/EP and EC/LP) and without peer feedback (EC/NP)

Source             Sum of Squares    df    Mean square    F           p       η2
Corrected Model    .092              1     .092           .159        .692    .004
Intercept          1200.819          1     1200.819       2087.660    .000    .980
GroupB             .092              1     .092           .159        .692    .004
Error              24.158            42    .575
Total              1237.000          44
Corrected Total    24.250            43

Note. R Squared = .004 (Adjusted R Squared = -.020). GroupB indicates the two different groups: the peer feedback groups (EC/EP and EC/LP: expert commentary with early or later peer feedback) and the no peer feedback group (EC/NP: expert commentary only).

RQ4-3. Transferred effects of the timing of the peer feedback (EC/EP vs. EC/LP)

Hypothesis 4-3. There will be significant differences in the scores on a transferred

clinical decision test between early peer feedback group (EC/EP) and later peer feedback group

(EC/LP).

Descriptive statistics on the scores on the transfer test from the participants in the early peer feedback group (EC/EP) and the later peer feedback group (EC/LP) are summarized in Table 4-48. The maximum possible score on the transfer test was 6.

Table 4-48

Descriptive statistics of the scores on the transfer test between EC/EP and EC/LP

                 Transfer Test
Group      N     M       SD
EC/EP      9     5.56    0.53
EC/LP      11    4.91    1.04
Total      20    5.20    0.89

Note. EC/EP indicates the group who received the expert commentary videos as well as early peer feedback. EC/LP indicates the group who received the expert commentary videos as well as later peer feedback.

To examine whether there was a significant difference between the two groups, a one-way ANOVA was conducted. The results showed no statistically significant difference in transfer test performance between the early peer feedback group (EC/EP) and the later peer feedback group (EC/LP) (F = 2.836, p > .05).

Table 4-49

Summary of a one-way ANOVA for the scores on the transfer test between groups with early peer feedback (EC/EP) and with later peer feedback (EC/LP)

Source             Sum of Squares    df    Mean square    F          p       η2
Corrected Model    2.069             1     2.069          2.836      .109    .136
Intercept          542.069           1     542.069        743.051    .000    .976
GroupC             2.069             1     2.069          2.836      .109    .136
Error              13.131            18    .730
Total              556.000           20
Corrected Total    15.200            19

Note. R Squared = .136 (Adjusted R Squared = .088). GroupC indicates the two different groups: the expert commentary with early peer feedback group (EC/EP) and the expert commentary with later peer feedback group (EC/LP).

RQ5. Student perception on the revision experiences

Research Question 5. What are the students’ perceptions on the revision activity in the

case-based online learning module?

o Research Question 5-1. What are the students’ perceptions on the expert

commentary for revising their initial clinical decisions?

o Research Question 5-2. What are the students’ perceptions on the peer feedback

for revising their initial clinical decisions?

o Research Question 5-3. What are the students’ perceptions on the effectiveness of

the peer feedback compared to the expert commentary?

In order to collect data on how students participated in the scaffolded revision activity

(expert commentary and/or peer feedback session) and how they valued their experience with the

scaffolded revision activity, an online survey and face-to-face interviews were conducted.

As the online survey was embedded in the closing Decision Point, the students who

finished all Decision Points were able to answer the survey. The survey was conducted

anonymously to gain more honest responses from the participants. Respondents were asked to

select the scaffolded revision activity they participated in: the expert commentary only group (EC/NP), the expert commentary with early peer feedback group (EC/EP), or the expert commentary with later peer feedback group (EC/LP). The participants in the EC/NP group were asked to express their perceptions of the learning experience with the expert commentary. The participants in either the EC/EP or the EC/LP group were asked to express their perceptions of the learning experience with the expert commentary as well as with the peer feedback; they were also asked to compare the helpfulness of the expert commentary and the peer feedback for revising their decisions.

Face-to-face interviews were conducted in December, upon the completion of the module.

Since the purpose of the interviews was to explore students’ learning experiences with peer

feedback in detail, interviewees were recruited from the EC/EP and EC/LP groups only.

A total of three female interviewees volunteered to share their learning experiences: one was

from the EC/EP group, and the other two interviewees were from the EC/LP group.

RQ5-1. Student perception on the expert commentary

Survey results. To examine the participants' perceptions of the scaffolded revision activity with expert commentary, two multiple-choice questions and follow-up open-ended questions were asked. The results of the two multiple-choice questions are summarized in Table 4-50. As described in the table, the participants showed positive attitudes toward the scaffolded revision activity with expert commentary, with an overall average of 4.46.

Table 4-50

Means and standard deviations of the items about student perceptions on the scaffolded revision activity with expert commentary

                                                      EC/NP (n=80)    EC/EP (n=9)    EC/LP (n=13)    Total
Item                                                  M (SD)          M (SD)         M (SD)          M (SD)
1. I think the self-revision activity was helpful.    4.34 (0.67)     4.33 (0.50)    4.23 (0.73)     4.32 (0.66)
2. I think the self-revision activity helped me
   reflect on the gaps between experts' opinions
   and mine.                                          4.64 (0.56)     4.44 (0.53)    4.46 (0.52)     4.60 (0.55)
Overall                                               4.49            4.38           4.35            4.46

Note. EC/NP indicates the group who received expert commentary videos only. EC/EP indicates the group who received expert commentary videos with early peer feedback. EC/LP indicates the group who received expert commentary videos with later peer feedback.

On the follow-up question asking why they thought the scaffolded revision

activity with expert commentary was helpful, especially in reflecting on the gaps between their

opinions and those of the experts, some participants reported that it was good to hear from

experts before revising their answers. It seemed that the video enabled the participants not only

“to still state [their] own thought process,” but also “to make sure [they] had all the facts straight

in [their] logic.” Some participants responded that the expert commentary videos enabled them

to “solidify points that [they] learned in class,” “realize what details [they] were missing related

to the lesson,” “reinforce it by adding it to [their] decision,” and “see [the experts’] clinical

application.” The participants also thought the comparison between their opinions and those of

the experts was helpful to “give [them] reassurance that [they were] on the right track.” With

this matter, a participant responded, “it made me more confident in my answers if there was a

question I was unsure about, or to go back and revise if there was something I didn't understand

correctly.” Furthermore, they thought the expert commentary videos helped them “identify areas

in [their] thought process that [they] may have potentially left out” and “understand how [their]

reasoning needs to be improved.”

RQ5-2. Student perception on the peer feedback

Survey results. To examine the participants' perceptions of the scaffolded revision activity with peer feedback, five multiple-choice questions and follow-up open-ended questions were distributed. The results of the five multiple-choice questions are summarized in Table 4-51. As described in the table, the participants showed positive responses toward the peer feedback session, with an overall average of 3.72. In particular, the group that received later peer feedback (EC/LP) was more positive (M = 4.03) than the group that received early peer feedback (EC/EP, M = 3.27).

Table 4-51

Means and standard deviations of the items about student perceptions on the scaffolded revision with peer feedback

                                                      EC/EP (n=9)    EC/LP (n=13)    Total
Item                                                  M (SD)         M (SD)          M (SD)
1. I think the peer feedback activity was helpful.    3.11 (1.17)    4.15 (0.56)     3.73 (0.99)
2. I think the peer feedback activity helped me
   reflect on the gaps between experts' opinions
   and mine.                                          2.89 (1.27)    3.85 (0.80)     3.45 (1.10)
3. I think the question prompts were helpful.         3.33 (1.00)    3.62 (1.12)     3.50 (1.06)
4. I think providing feedback was
   helpful/meaningful.                                3.44 (1.13)    4.23 (0.60)     3.91 (0.92)
5. I think receiving feedback was
   helpful/meaningful.                                3.56 (1.13)    4.31 (0.63)     4.00 (0.93)
Total                                                 3.27           4.03            3.72

Note. EC/EP indicates the group who received expert commentary videos with early peer feedback. EC/LP indicates the group who received expert commentary videos with later peer feedback.

On the follow-up questions asking why they thought the peer feedback session

was helpful, especially in reflecting on the gaps between their opinions and those of the experts,

the participants reported that listening to other classmates’ opinions on the same problem was

helpful. In particular, the participants valued “different ideas and new ways of looking at

things.” For example, a participant reported “[my] peers can help explain their interpretations of

the experts opinions in a different but very relatable way.” Also, they mentioned that discussing

their responses with peers enabled them to “gain confidence in [their] answers, and communicate

and solidify [their] thoughts relative to these cases” as well as “fill in [their] knowledge gaps.”

However, several participants thought that additional involvement of the experts in the peer feedback session would have been more beneficial. They felt that discussions with peers alone had some limitations. For example, they reported that “there were some points [they] still [weren’t] clear on,” and that peers were not helpful because they “all interpreted the questions differently.” Also, some participants noted that they had difficulty focusing on the module “because everyone else was around.” One participant expressed that two hours were not enough to complete the two Decision Points.

On the question about the effectiveness of the provided question prompts, some

participants reported that the question prompts were helpful, especially in leading them “as to

what [the experts] wanted [them] to discuss” as well as in helping them “organize information.”

They believed that the prompts were “an accurate representation of certain clinical decisions that

one will have to make in practice.”

However, some participants did not find the reflective prompts helpful because they were too general and redundant (e.g., “They were fairly repetitive or maybe too general, and I found myself answering repetitively.”). Also, some participants mentioned that they never used the

prompts and “just discussed [their] rationales and read over each other's work to make sure it

read the way [they] intended it to.”

On the question about the effectiveness of providing and/or receiving feedback, the respondents did not differentiate between the effectiveness of providing and that of receiving feedback. Instead, they expressed their general attitudes toward having a feedback session among peers.

Some respondents showed positive attitudes toward peer feedback: they reported “discussing the

case with others helps [them] learn and retain information better.” Specifically, they believed

peer feedback helped them “explain [their] opinion,” “give a rationale for [their] choice,”

“reconsider some of [their] points and logic,” “correct anything that wasn’t clear to other people,”

“refine communication skills,” and “gain confidence in [their] answers.”

Other respondents, on the other hand, reported that the peer feedback “really didn’t

change much” for them. They valued the expert opinions over the peer feedback in that “[they]

had the expert opinions so [they] knew to an extent the "right answer" before [they] were getting

or giving any peer feedback.” For this reason, the peer feedback “didn’t result in anyone changing their answers really.”

To improve the peer feedback activity, the participants made several suggestions.

First, many respondents wished that “it would have been nice to get a different expert opinion on

certain aspects.” Specifically, they said “a little more structure or open conversation led by a

teacher during the feedback sessions,” such as “allowing clinicians to at least take questions and

choose whether or not to answer” would have been better. Second, some participants suggested a larger group discussion; they thought “a whole table discussion instead of paired off” would

allow more discussion. Third, a participant suggested that “[having] peer feedback BEFORE

listening to the experts” would have been better. The participant said “[it] would be more

helpful if instead of watching the videos and then discussing, [they] discuss the case itself with

clinicians and [their] peers, similar to rounds.”

Interview results. First, the interviewees were asked why they had decided to participate voluntarily in the peer feedback session. Interviewee A participated in the session because “it helps [her] work through case better.” She said, “[e]ven though I could go through it

faster by myself, I feel like sometimes I don’t retain information well when I do that. So I can

talk to someone else hear their ideas too, it helps me retain things better.” Another reason why

she chose to participate in the session was that she wanted to see how other students study with

the module. Interviewees B and C participated in the session because they thought there would be a chance to ask the experts questions.

The peer review session wasn’t quite what I thought it was going to be. I guess [the

instructor] was more involved. She was going to bring up important points. I thought she

was going to guide us as the experts did in the videos, which doesn’t really make sense

because that’s why the expert videos are there. So she doesn’t have to do that. But I

thought she was going to stimulate discussions. It was more read someone else’s

assessment and point out for them what the differences (Interviewee C, Lines 268-273).

On the follow-up question about the activities they had expected before attending the session, all three interviewees mentioned that they expected some interaction with the experts.

They seemed to expect that the experts would “help direct [their] thinking” and answer questions

that some participants might have. For example, interviewee A responded that she might want to “talk with the professors about medical decisions, maybe that would’ve given [her] more direction in what to put.” Interviewee B mentioned that she might have asked the experts the most perplexing questions, such as the case assessment, which required selecting the two or three most important cues out of five or six.

The participants were then asked to describe how they provided and received feedback with their partners. Interviewee A said she and her partner mostly talked about the patient and the medical case itself. Interviewee B responded that she showed her responses to her partner, and the partner provided feedback accordingly; however, both respondents mentioned that they did not exchange much feedback. Interviewee C and her partner also shared and compared each other’s responses. She said her partner’s responses were helpful for reminding her of points she had missed. While exchanging responses, she wrote notes for herself and added the missed points during the revision time. She said, however, that her initial responses would have been much better if she had had enough time to write: “[I] was trying to type so fast,” and she forgot to write points down even though she had thought of them.

We talked about like if the question had to do with, what we would, what the goals were

for the patient, just hearing what they had to say about the goals, they thought they were

important and I contribute what I thought, which goals I thought were important. If they

had a good argument for one of their goals that I hadn’t written down, if I agree with

them after discussing it then I would include that goal. Or maybe if I had a goal the same

as one they had but maybe if they had expanded on the information a little more on their

answer, then I would think about it more and include the more information about it

(Interviewee A, Lines 116-122).

We swapped and had our partner read ours and give us suggestions. Initially, I think

throughout the entire thing, mine were pretty long descriptions as you can see. My

partner’s were very like short. So when she was reading mine, she was like “I don’t

really have many things for you to add or change, because you wrote so much.” She did

make a couple of suggestions, but… So for the revising, I didn’t change that much. I

might’ve changed a sentence or two. I didn’t change very much (Interviewee B, Lines 272-277).


I was thinking like, “Oh, man. I knew. I thought of this. I was trying to type so fast, and

I forgot to write it down. I thought of it, but I forgot to write it down.” Then when we

come to the revision section, I would like to try to add the things I’ve forgotten. I wrote

notes for myself while we were correcting each other. “I forgot to write this. I forgot to

write this.” And I went back to add it during the revision time (Interviewee C, Lines 308-

313).

The interviewees thought they did not have much feedback to provide each other, because they had “the same mindset and the same thought processes” from taking the same courses together. For this reason, the interviewees suggested it would have been helpful if they

had a chance to receive feedback from experts who “might have whole different side point of

view than [them] who are just focused on thinking about what [they]’ve learned in this course

and how that relates.”

On the question of whether they revised much during the peer feedback session, interviewee A said she revised her initial decisions substantially based on her peer’s feedback and her own insights while exchanging feedback with her partner. On the other hand, interviewee B did not revise very

much. She said, “I wanted to say and then after I listened to the experts, I was like, I addressed

all the same things that they addressed.” She also did not identify herself as a group studier, so she was not able to concentrate on the module as much as she did when working on it alone at home. She thought her partner was not focused either, so “[their] suggestions [were] a little bit

superficial.” She also said, “we were might have been not as honest as what we thought that

would change them what should happened.”


On the follow-up question asking their opinions about an anonymous peer feedback session, interviewees B and C reacted positively because they thought people would give more honest and deeper feedback.

When you were sitting with someone and talking about it, you want to agree with what

they wrote ‘cause you don’t want them to feel bad because what they wrote was valid. I

feel like definitely you don’t want someone else to feel bad about what you’re critiquing

them. So, yes, I do think that had it been anonymous thing, you could’ve addressed on

your own time, you would’ve get more feedback (Interviewee C, Lines 351-355).

When asked to compare the value of providing and receiving feedback, interviewee B found providing feedback more valuable. She explained that receiving feedback did not help her think about her own opinion at the time, while providing feedback prompted her to think about how she would change her partner’s responses, which eventually led her to reflect back on her own responses.

RQ5-3. Student perceptions of the helpfulness of the scaffolded revision activities

Survey results. To examine the participants’ perceptions of the effectiveness of the expert commentary and peer feedback, three open-ended essay questions were used. On the question of which scaffolded activity was more helpful for revising their answers, eight of the nine participants in the early peer feedback group (EC/EP) preferred the expert commentary videos to peer feedback, whereas nine of the 13 participants in the later peer feedback group (EC/LP) preferred peer feedback to the expert commentary only.


The participants who preferred the expert commentary only responded that the expert commentary videos were enough for them to gain insights into how to form, reinforce, or change their own decisions. Some participants even mentioned, “The peer feedback really didn’t bring up

any points I hadn't thought about or that hadn't been addressed by the expert opinions.” In

particular, the participants believed that receiving feedback from peers with the same level of

training as them was not helpful. Several participants said, “At this point in our career we have

all taken the same courses and formed very similar opinions based on that, but did not consider

everything that the experts did when it came to making decisions,” “When I had questions or

comments none of us knew the answers.” In addition, several participants expressed that they were more focused when studying alone than when studying with peers. They seemed to

“like to take time to think, search through books and research to try to come up with the best

answer.” During the peer review, on the other hand, several participants seemed to rush through answering everything.

The participants who preferred the peer feedback activity to the expert commentary only

mentioned that they were able to read their own responses more closely with their peers. They

also found the peer feedback activity meaningful in that they were able to get out of their

comfort zone and see the same case from a different perspective (e.g., “I liked the peer feedback

more since I only have a limited perspective and limited number of ideas when I talk to myself.

Talking to another people helps me see new things.”). Also, they valued the discussion with

peers who were in the same level of training in that “it allowed for discussion.” “[The peer

feedback activity] helped to assist me in the thought process behind answering the questions

about the case. The self-revision activity was helpful in correcting my mistakes during my initial

interpretation of the case, but that was better for learning the correct material as opposed to the


process of reaching that correct answer.” Last, the participants thought the peer feedback

activity allowed them to “come prepared and confident and get more out of the session.”

Interview results. The three interview participants were asked to compare their learning experiences in the peer feedback activity with those in the self-revision activity. Interviewee A participated in the first peer feedback session, and interviewees B and C participated in the second peer feedback session.

They were asked to compare the effectiveness of the peer feedback activity and the self-

revision activity. Interviewee A preferred the peer feedback activity, and interviewees B and C preferred the self-revision activity. Interviewee A said, “Even though it went faster when I was

doing by myself, because I had to stop to talk to someone else (in the peer feedback session), I

feel like, seeing how someone else thinks about a problem versus how I do was helpful.”

Interviewees B and C found the self-revision activity more helpful. They both mentioned

that they were more focused when no one was around. Although interviewee B said, “I guess the

peer session might’ve been easier to do, because I had other people’s opinions too. It gave me a

little bit more confident or something. We both had the same thought processes. That kind of

thing made me feel easier,” she added, “It was definitely much more helpful sit at home and do

this on my own, look up the resources that I need to look up, not talk to anybody about it.”

Interviewee C also valued the self-revision activity over the peer feedback activity. She seemed

to be less coherent in her writing during the peer feedback session because she was rushing her writing to keep pace with her partner. She said, “If I was doing by myself, I

would’ve been thinking more thoroughly what I was saying and writing sentences full sense.

These probably make sense, but I feel like I was probably writing so quickly that I might’ve

missed the point. I might’ve not made the point.”


When the interviewees were asked to grade their revised answers in the peer feedback

session and those in the self-revision activity, interviewee A expected that the quality of the

revised responses in the peer feedback session was higher, while interviewees B and C expected that the revised responses in the self-revision activity were better. Interviewee A thought, “because I

had more information from, to include from my partner and maybe different things I forgot from

class to include in there.” Interviewees B and C, on the other hand, expected that they did better in the self-revision activity because they were able to think more deeply and thoroughly.

Then, they were asked to rank the helpfulness of the expert commentary videos and the peer feedback session in enhancing their revision. All three interviewees believed that the expert

commentary videos were most helpful in facilitating their revision. Interviewee A put the peer

feedback activity close to the expert videos “because getting to talk to other people helps it to

cement things in [her] brain better,” while the other interviewees ranked the peer feedback activity as not helpful because they did not identify themselves as “group studier[s].”


CHAPTER 5

CONCLUSION

In this study, the assumption was made that promoting veterinary students’ knowledge

application and reflection would enhance their clinical decision-making skills. The case-based

online learning module was developed in order to promote the students’ knowledge application,

and the scaffolded revision activities were designed to promote their reflection on their thinking

and actions.

With the case-based online learning module and scaffolded revision activities, the

students in this study were expected to make a series of clinical decisions (referred to as the initial clinical decisions) and then revise them (referred to as the revised clinical decisions). In the

initial decision-making activity, the students were asked to watch the case videos, identify and

analyze the problems, and make a decision with the aid of critical thinking prompts. In the revision activity, they had a chance to compare their opinions with those of experts or peers. The expert commentary videos narrating expert veterinarians’ decision-making

approaches and their own decisions were provided to all participants. In addition, peer feedback

was provided in two separate sessions, and the students were allowed to participate in either session or in neither. Thus, all participants took part in one of three

scaffolded revision activities: expert commentary with no peer feedback (EC/NP), expert

commentary with early peer feedback (EC/EP), or expert commentary with later peer feedback

(EC/LP). This study examined whether the quality of the students’ revised decisions was


significantly enhanced after participating in one of the scaffolded revision activities and

identified which scaffolded revision activity was most helpful.

This chapter presents a brief summary of the results and discusses them according to the research questions. Implications of the study and suggestions for future research are also discussed.

Summary of the Findings

This research explored five research questions to examine the effects of the scaffolded

revision activities on veterinary students’ clinical decision-making skills. Specifically, the first

three research questions examined the gain effects of the revision activities, the fourth research

question examined the transfer effects of the revision activities, and the last research question

explored the students’ perceptions of the scaffolded revision activities.

Research Question 1. Gain Effect—Revision Effect

The first research question tested whether the quality of the students’ revised clinical

decisions was significantly enhanced after experiencing the scaffolded revision activities. The

results indicated that the qualitative changes between the initial and the revised clinical decisions

were statistically significant within the three sub-dimensions of case assessment, prioritization of

issues and objectives, and plan of an immediate action.

Research Question 2. Gain Effect—Revision x Group Effect

After verifying the significant differences in the quality of the initial and revised clinical

decisions, the second research question tested the quality of the initial and revised clinical

decisions among the groups based on the three scaffolded revision activities. The first comparison was among the EC/NP, EC/EP, and EC/LP groups on the basis of the quality of the initial and revised clinical decisions. The second


comparison was between the EC/NP group and the two groups that received peer feedback, EC/EP and EC/LP. Lastly, the third comparison was between the two peer feedback groups, EC/EP and EC/LP, to see how the timing of the peer feedback affected the quality of the initial and revised clinical decisions.
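
To make the structure of these comparisons concrete, the sketch below uses entirely hypothetical data and variable names (not the study's actual dataset or analysis code) to show one way a Revision x Group design of this kind could be modeled in Python: each student contributes an initial and a revised decision-quality score, and the phase-by-group interaction term corresponds to the question of whether the gain from initial to revised decisions differs across the EC/NP, EC/EP, and EC/LP groups.

```python
# Minimal sketch of a Revision x Group comparison on simulated data.
# All scores, effect sizes, and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(["EC/NP", "EC/EP", "EC/LP"], 10)   # 10 hypothetical students per group

rows = []
for sid, grp in enumerate(groups):
    baseline = rng.normal(2.5, 0.4)                   # student-specific initial decision quality
    gain = 1.0 + (0.3 if grp != "EC/NP" else 0.0)     # assumed extra gain with peer feedback
    rows.append({"student": sid, "group": grp, "phase": "initial", "score": baseline})
    rows.append({"student": sid, "group": grp, "phase": "revised",
                 "score": baseline + gain + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# A random intercept per student accounts for the repeated (initial vs. revised) measures;
# the phase:group interaction corresponds to the Revision x Group effect described above.
model = smf.mixedlm("score ~ phase * group", data, groups=data["student"])
print(model.fit().summary())
```

A repeated-measures ANOVA or MANOVA of the kind referenced later in this chapter answers the same question; the mixed-effects model is shown here only because it makes the interaction term explicit in a few lines.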

RQ2-1. Significant enhancement among EC/NP, EC/EP, and EC/LP. RQ2-1 tested whether there was a statistically significant difference in the quality of the initial and revised clinical

decisions among the groups with expert commentary only (EC/NP), with expert commentary and

early peer feedback (EC/EP), and with expert commentary and later peer feedback (EC/LP). The

results indicated that there was no statistically significant difference among the groups.

RQ2-2. Significant enhancement between peer feedback group (EC/EP and EC/LP)

vs. no peer feedback group (EC/NP). RQ2-2 tested whether there was a significant difference in the quality of the initial and revised clinical decisions between the groups with peer feedback (EC/EP and EC/LP) and the group without peer feedback (EC/NP). The results indicated that the differences between the two groups were not significant.

RQ2-3. Significant enhancement between early peer feedback (EC/EP) and later

peer feedback (EC/LP). RQ2-3 tested whether there was a significant difference in the quality of the

initial and revised clinical decisions between the group with early peer feedback (EC/EP) and the

group with later peer feedback (EC/LP). The results indicated that the differences between the two groups were not significant.

Research Question 3. Gain Effect—Revision x Group x Session Effect

The third research question examined whether there were any significant differences in the quality of the initial and revised clinical decisions among the groups when compared across two separate sessions. Similar to the second research question, the first


comparison was among the EC/NP, EC/EP, and EC/LP groups on the basis of the quality of the initial and revised clinical decisions. Next, the second comparison was between the EC/NP group and the two groups that received peer feedback, EC/EP and EC/LP. Then the third comparison was between the two peer feedback groups, EC/EP and EC/LP, to see how the timing of the peer feedback affected the quality of the initial and revised clinical decisions. Finally, each of these three comparisons was examined across the two separate sessions.

RQ3-1. Significant enhancement among EC/NP, EC/EP, and EC/LP across sessions.

RQ3-1 tested whether there was a significant difference in the quality of the initial and revised clinical

decisions among the groups with expert commentary only (EC/NP), with expert commentary and

early peer feedback (EC/EP), and with expert commentary and later peer feedback (EC/LP)

across the two sessions. The results indicated that the differences among groups across the

sessions were not significant.

RQ3-2. Significant enhancement between peer feedback group (EC/EP and EC/LP)

vs. no peer feedback group (EC/NP) across sessions. RQ3-2 tested whether there was a significant

difference in the quality of the initial and revised clinical decisions between the groups with peer

feedback (EC/EP and EC/LP) and the group without peer feedback (EC/NP) across the two

sessions. The results indicated that the differences between groups across the sessions were not

significant.

RQ3-3. Significant enhancement between early peer feedback (EC/EP) and later

peer feedback (EC/LP) across sessions. RQ3-3 tested whether there was a significant difference in the

quality of the initial and revised clinical decisions between the group with early peer feedback


(EC/EP) and the group with later peer feedback (EC/LP) across the two sessions. The results indicated that the differences between the groups across the sessions were not significant.

Research Question 4. Transfer Effect

The fourth research question examined whether there was any significant difference in

the students’ performance on the transfer test. The transfer test comprised six multiple-choice

questions with text-based cases of digestive diseases and patients’ medical records. The students

were asked to interpret data, identify the problems, and make a clinical decision for the

patient. In order to test the transfer effect of the scaffolded revision activities, three comparison

sets were identified. The first comparison was among the EC/NP, EC/EP, and EC/LP groups on the basis of performance on the transfer test. Next, the second comparison was between the EC/NP group and the two groups that received peer feedback, EC/EP and EC/LP. Then the third comparison was between the two peer feedback groups, EC/EP and EC/LP, to see how the timing of the peer feedback affected the students’ performance on the transfer test.

RQ4-1. Transfer effects among EC/NP, EC/EP, and EC/LP. RQ4-1 tested whether there was a significant difference in performance on the transfer test among the groups with expert commentary only (EC/NP), with expert commentary and early peer feedback (EC/EP), and with expert commentary and later peer feedback (EC/LP). The results indicated that the difference among the groups was not significant.

RQ4-2. Transfer effects between peer feedback group (EC/EP and EC/LP) vs. no

peer feedback group (EC/NP). RQ4-2 tested whether there was a significant difference in the

performance on the transfer test between the groups with peer feedback (EC/EP and EC/LP) and


the group without peer feedback (EC/NP). The results indicated that the difference between the

two groups was not significant.

RQ4-3. Transfer effects between early peer feedback (EC/EP) and later peer

feedback (EC/LP). RQ4-3 tested whether there was a significant difference in performance on the transfer test between the group with early peer feedback (EC/EP) and the group with later peer

feedback (EC/LP). The results indicated that the difference between the two groups was not

significant.

Research Question 5. Students’ Perceptions

The fifth research question examined the students’ perceptions of the revision activities using expert commentary and peer feedback. To explore the students’ perceptions, an online

survey and face-to-face interviews were conducted.

RQ5-1. Perceptions of the expert commentary videos. The participants’ perceptions of the scaffolded revision activity with the expert commentary videos were positive in general. The participants mentioned that the expert videos solidified what they had learned in class and helped them learn how experts use that knowledge in real practice. They also acknowledged that comparing their opinions with those of experts was a great opportunity to check whether their clinical decision-making process was correct. Moreover, the participants believed that the expert commentary was meaningful in helping them identify weak areas in their thought processes that needed to be improved.

RQ5-2. Perceptions of the peer feedback. The participants’ perceptions of the scaffolded revision activity with peer feedback were positive in general. The participants, in particular, believed that peer feedback allowed them to experience different but very relatable interpretations of the experts’ opinions. Also, comparing their own opinions with those of


colleagues helped them retain information and fill in knowledge gaps. They also mentioned that they were able to reconsider some of their points and thus better explain and give a rationale for their choices, which resulted in greater confidence and refined communication skills.

To some participants, however, peer feedback did not help their revision

activity. Because these peers shared similar levels of knowledge based on what they had learned

together in similar courses, these individuals rarely had differing opinions. For this reason,

several participants expressed that it would have been more beneficial to them if the experts had

also been involved in the peer feedback sessions and guided their discussions, specifically on

what to discuss as well as how to discuss it.

RQ5-3. Comparison between expert commentary only and expert commentary with peer feedback. Concerning the perception of which revision activity was more helpful, the majority of the

participants in the EC/EP group who received early peer feedback preferred the expert

commentary only, whereas the majority of the participants in the EC/LP group who received

later peer feedback preferred the expert commentary with peer feedback.

The participants who preferred the revision activity with expert commentary only felt that

the expert commentary videos were enough to promote their revision. They explained that their

peers might have limitations in providing meaningful feedback to stimulate revision. They also

mentioned that they had difficulty focusing on the learning module because other people were physically present, which contrasted sharply with the individualistic atmosphere of an

online learning environment. The participants who preferred the revision activity with expert

commentary and peer feedback, on the other hand, believed that discussions with peers provoked

deeper critical thinking and honed their communication skills.


Effects of the case-based online learning module and scaffolded revision activities

Veterinarians’ clinical decision-making skills can be improved when they engage in critical thinking about how to apply their academic knowledge and subsequently make an informed decision. The results of the current study showed that learning with the case-based online module, which had students revisit and revise their decision-making processes, has the potential to improve their clinical decision-making skills. This section first interprets the effects of the expert commentary and then the effects of peer feedback. The rest of the section compares the effects of the expert commentary with those of peer feedback, as well as the effects of early versus later peer feedback.

Effects of the Expert Commentary

The quantitative results of this study showed that the expert commentary could enhance

the students’ clinical decision-making skills within the three sub-areas of case assessment,

prioritization of issues and objectives, and plan of an immediate action. Consistent with previous

research (e.g., Croskerry & Nimmo, 2011; Gielen, Peeters, Dochy, Onghena, & Struyven, 2010;

Gielen, Tops, Dochy, Onghena, & Smeets, 2010; Mory, 2004; Pedersen & Liu, 2002), expert

commentary could increase the accuracy of the decision. For example, in a study by Pedersen

and Liu (2002), expert modeling impacted the quality of students’ reasoning and their rationale

for their decision. In order to make a rational clinical decision, the students in this study were

supposed to develop proficiency in utilizing knowledge they had learned in classes, making a

decision, and revising the decision throughout their work in the case-based learning module. In

this way, the expert commentary might provide the students with reliable feedback which

enables them to appreciate, understand, and correct errors, which, in turn, results in better

decision making (Croskerry & Nimmo, 2011).


In addition, the qualitative results from the students’ online survey and face-to-face

interviews provided more insight into the ways students could benefit from the expert

commentary videos. For example, participants in this study thought the expert commentary

videos helped them solidify what they had learned in class and guided them in learning how the

acquired knowledge from school could be utilized to solve real problems. To elaborate, this

study showed that expert commentary could benefit students by helping them acquire and apply

knowledge, consistent with previous research (e.g., Pedersen & Liu, 2002).

Some participants also felt that the expert commentary videos facilitated further

reflection on their decision-making process and provided a check on whether their clinical decisions were on the right track. This benefit is consistent with previous findings that feedback, especially when it comes from experts, can serve as a standard of performance against which learners compare their actual performance (Butler & Winne, 1995; Mory, 2003,

2004; Winne & Hadwin, 1998).

Additionally, participants mentioned that the expert commentary videos allowed them to

identify weak areas in their decision-making reasoning. This result may indicate that expert

opinions, as external feedback, could state explicitly whether the decisions and performance of

the learner were adequate as well as model appropriate decisions (Butler & Winne, 1995; Mory,

2003, 2004; Winne & Hadwin, 1998). Also, the results of this study may support the idea that external

feedback could provide individuals with opportunities to broaden and deepen their perspective

(van den Boom, Paas, & van Merriënboer, 2007).

Effects of the Peer Feedback

The quantitative results of this study failed to support the hypothesis that the combined

use of peer feedback and expert commentary would be more effective in enhancing clinical


decision-making skills than the independent use of expert commentary only. From the online

survey and the face-to-face interviews, however, participants felt that peer feedback helped them

retain knowledge better (Johnson & Johnson, 1993; Mory, 2004; Van Lehn et al., 1995) by

allowing them to communicate their thoughts with peers (Fischer, Kollar, Stegmann & Wecker,

2013; Kolodner, 2007; Sessa et al., 2011).

The effects of peer feedback did not reach statistical significance, which seems to stem from the fact that the actual peer feedback sessions differed from the ideal situation. In an ideal peer feedback situation, students compare their ideas, exchange constructive and suggestive feedback, and use as well as evaluate evidence (van der Pol, van den Berg, Admiraal, & Simons, 2008). Specifically, in this study the participants mentioned that they neither provided nor received constructive and suggestive feedback from peers. The lack of constructive and suggestive peer feedback could stem from learner characteristics and their familiarity with the task. Additionally, the small sample size might explain why the results were

not statistically significant.

In terms of learner characteristics, the participants’ lack of knowledge and inexperience

may have hindered their ability to provide constructive and suggestive peer feedback. Due to

the participants’ lack of knowledge and inexperience within the veterinary field, the accuracy of

the feedback provided by peers could be lower than that of the feedback by experts (Gielen, Tops,

et al., 2010). Furthermore, the participants’ limited domain knowledge might cause them to be

unfamiliar with asking productive questions or elaborating on ideas and thoughts (Ge & Land,

2003). Understandably, the peer feedback within this study was less direct and concrete than it would have been under ideal conditions (Cromley & Azevedo, 2005; van den Boom et al., 2007).


In terms of familiarity with the task, the participants found the given task predictable because the case-based learning module dealt with a typical canine digestive disease, which the participants had already studied. To elaborate, when the context of a problem is familiar to a student, the possible courses of action tend to be relatively predictable within their decision-

making process (Shin, Jonassen, & McGee, 2003). This familiarity might discourage students

from regulating their decision-making process—planning, monitoring, and reflecting—which is

thought to be a strong predictor of successful problem solving (Shin et al., 2003).

In terms of the small sample size, it is well established that the size of the sample affects the significance of a test statistic (Eng, 2003; Field, 2013). As the descriptive results indicated, there was a trend for the EC/EP group, which received expert commentary as well as early peer feedback, to have higher decision-quality scores than the other groups. Thus, increasing the sample size might increase the statistical power and yield significant interaction effects. To elaborate, it is recommended to have at least 20 to 30 observations per cell in ANOVA/MANOVA to obtain robust statistical results (Geweke & Singleton, 1980; Lilliefors, 1967; Tomita, 2006; Ntoumanis & Myers, 2016; Swanson & Holton,

2005).
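
As a concrete illustration of this sample-size argument, the short sketch below is a hedged example rather than part of the study's analysis: it assumes a "medium" effect size of Cohen's f = 0.25 (Cohen, 1988) and uses statsmodels to show how statistical power changes with the number of observations in a three-group one-way ANOVA.

```python
# Illustrative power calculation for a three-group one-way ANOVA.
# The effect size (Cohen's f = 0.25) and alpha are assumptions for this sketch only.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Power achieved with 45 total observations (about 15 per cell) under the assumed effect size
power_small_n = analysis.power(effect_size=0.25, nobs=45, alpha=0.05, k_groups=3)

# Total sample needed to reach the conventional 80% power under the same assumptions
required_n = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=3)

print(f"Power with 45 observations: {power_small_n:.2f}")      # well below 0.80
print(f"Total observations for 80% power: {required_n:.0f}")   # on the order of 150+ here
```

A calculation of this kind makes explicit how small, unequal cell sizes limit the power to detect interaction effects, which is consistent with the per-cell recommendation cited above.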

Effects of the Timing of the Peer Feedback

The combined use of peer feedback and expert commentary was not statistically more

effective in enhancing clinical decision-making skills than the independent use of expert

commentary only. Additionally, the quantitative data of this study indicated that the differences

between the participants in the early peer feedback group and those in the later peer feedback

group were not statistically significant. Thus, there was no detectable effect of having the participants exchange feedback in either the early or the later peer feedback session.


Implications of the Study

Designing learning activities that enhance students’ clinical decision-making skills

requires a thoughtful approach. This study may offer possibilities for teaching clinical decision-

making skills by promoting students’ knowledge application and reflection. In the following

section, the implications of the scaffolded revision activities in the case-based online learning

module for knowledge application and reflection are discussed.

Implications for Enhancing Knowledge Application

The results of this study showed that revision activities in case-based learning have the potential to bridge the gap between theory and practice. This study may suggest two

implications that educators and instructional designers need to consider to promote students’

knowledge application: providing authentic cases and an entire cycle of decision-making.

This study may support the importance of providing authentic cases, which include real-

world challenges (Choi et al., 2013). Authentic cases with real-world challenges are believed to

lessen the gaps between theories and reality by providing an opportunity for students to

determine what resources and information they need, how to use them, and how to perform a

given task using them (J. S. Brown et al., 1989). To bolster the realism of clinical cases, one promising approach is to involve multiple players in the decision-making process

(Higgs & Jones, 2008; Orasanu & Connolly, 1993; Smith et al., 2008; Terry & Higgs,

1993). The current learning module, for example, described the owner’s financial concerns,

which is one of the characteristics that frequently influence a doctor’s decision-making

(Vandeweerd et al., 2012b).

This study may also support the importance of providing an entire cycle of decision-

making, which is believed to help students acquire a broad range of problem-solving


skills (Williams, 1992). Many studies have focused on a specific phase of an entire cycle of

decision-making, such as diagnostic decisions (e.g., Croskerry & Nimmo, 2011; Cutrer et al.,

2013; Elstein, Schwartz, & Schwarz, 2002). The current study, however, provided a case-based

online learning module that allowed students to experience an entire cycle of clinical decision-

making, including diagnostic, therapeutic, and prognostic decision-making.

Implications for Promoting Reflection

Consistent with other studies (e.g., Croskerry & Nimmo, 2011; Epstein, 1999; Jones,

1992; Mamede & Schmidt, 2004, 2005), the findings of this study may support the importance of

reflection in enhancing students’ clinical decision-making skills. In this study, the participants

received opportunities to make their own decisions and revise their initial decisions. In other

words, the results may indicate that encouraging students to step back from their problem

situation and reflect on their thinking process is effective in enhancing the quality of the clinical

decisions.

In addition, the findings of this study may support previous studies that have shown that

having students compare their own opinions with those of their peers enhances their clinical

decision-making skills (e.g., Croskerry & Nimmo, 2011; Gielen, Peeters, Dochy, Onghena, &

Struyven, 2010; Gielen, Tops, et al., 2010). As indicated in previous studies (Anderson,

1987; Pirolli & Anderson, 1985), being exposed to experts’ decision-making process can benefit

students, since most students approach problem solving by referring to known examples or

developing abstract declarative rules that guide their problem solving. The participants in this

study compared their opinions with those of experts and/or those of their peers, and the results

might indicate that expert commentary has the potential to improve their clinical decision-

making skills.


Suggestions for future research

This study focused on the use of scaffolded revision activities in case-based learning with

third-year veterinary students to enhance their clinical decision-making skills. The results of this

study suggest several important directions for future research on case-based learning, peer

feedback, and clinical decision-making.

Testing with unfamiliar cases

This study demonstrated enhanced clinical decision-making skills with familiar cases. Familiarity with the case might reduce students’ perceived difficulty level of the case and thus undermine the importance of the scaffolded revision activities for students (Pedersen & Liu,

2002). Therefore, future studies may consider using unfamiliar cases to test the effects of case-

based learning on improving veterinary students’ clinical decision-making performance.

Providing sufficient scaffolding to guide interactions

Consistent with previous research (Ge & Land, 2003), this study suggests that simply

providing reflective prompts might be insufficient for successfully guiding a peer feedback

session. According to the peer feedback groups’ anecdotal data, participants did not use the

reflective prompts productively. Thus, further studies may consider additional strategies, such as

instructor’s monitoring or guidance, to scaffold a peer feedback session. To elaborate, the

instructor can help students with asking their peers questions, elaborating or explaining their

thoughts, constructing arguments, or providing constructive and suggestive feedback (Ge &

Land, 2003).

Testing the generalizability of the results of this study

The participants in this study had a chance to self-select to join one of the three types of

scaffolded revision activities—EC/NP, EC/EP, or EC/LP—which resulted in unequal


distribution with small group sizes. Thus, it is necessary to replicate the study with random assignment and an increased sample size to confirm the generalizability of the results.

Testing the transfer of clinical decision-making skills

This study tested both gain effects and transfer effects of the scaffolded revision

activities. However, the interval between the gain tests and the transfer test was approximately one to two weeks. From an educational standpoint, this one- to two-week gap may not be long enough to show the training effect of the scaffolded revision activities. Moreover, clinical decision-making skills are rather long-term skills. In order to test the far-transfer effects of

the scaffolded revision activities and the case-based learning module, future studies may

consider using a longer interval between the intervention and the transfer test to see whether an individual is indeed able to make a better clinical decision.


REFERENCES

Anene, B. M. (2013). Clinical decision making in veterinary practice. Nigerian Veterinary

Journal, 34(4), 877–882. Retrieved from http://etheses.nottingham.ac.uk/2051/

Banning, M. (2007). A review of clinical decision making: Models and current research. Journal

of Clinical Nursing, 17(2), 187–195. doi:10.1111/j.1365-2702.2006.01791.x

Barrows, H. S., & Tamblyn, R. M. (1980). Problem-based learning: An approach to medical

education. New York: Springer.

Borleffs, J. C. C., Custers, E. J. F. M., van Gijn, J., & ten Cate, O. T. J. (2003). “Clinical

reasoning theater”: a new approach to clinical reasoning education. Academic Medicine,

78(3), 322–325. doi:10.1097/00001888-200303000-00017

Boud, D., & Walker, D. (1998). Promoting reflection in professional courses: The challenge of

context. Studies in Higher Education, 23(2), 191–206.

doi:10.1080/03075079812331380384

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more

mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation,

and understanding (pp. 65–115). Hillsdale, NJ: Lawrence Erlbaum Associates.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning.

Educational Researcher, 18(1), 32–42.

Cameron, S., & Turtle-song, I. (2002). Learning to write case notes using the SOAP format.

Journal of Counseling & Development, 80(3), 286–292. doi:10.1002/j.1556-

6678.2002.tb00193.x


Campanella, M., & Lygo-Baker, S. (2014). Reconsidering the lecture in modern veterinary

education. Journal of Veterinary Medical Education, 41(2), 138–145.

Chaiken, S., & Trope, Y. (1999). Dual-process theories in social psychology. New York:

Guilford Press.

Choi, I. (2009). A case-based e-learning framework for real-world problem solving: Implications

for human resources development. Journal of Korean HRD Research, 4(1), 81–100.

Choi, I., Hong, Y.-C., Park, H., & Lee, Y. (2013). Case-based learning for anesthesiology:

Enhancing dynamic decision-making skills through cognitive apprenticeship and cognitive

flexibility. In R. Luckin, S. Puntambekar, P. Goodyear, B. Grabowski, J. Underwood, & N.

Winters (Eds.), Handbook of design in educational technology (pp. 230–240). New York,

NY: Routledge.

Choi, I., & Lee, K. (2009). Designing and implementing a case-based learning environment for

enhancing ill-structured problem solving: Classroom management problems for prospective

teachers. Educational Technology Research and Development, 57, 99–129.

doi:10.1007/s11423-008-9089-2

Choi, I., Lee, S. J., & Kang, J. (2009). Implementing a case-based e-learning environment in a

lecture-oriented anaesthesiology class: Do learning styles matter in complex problem

solving over time? British Journal of Educational Technology, 40(5), 933–947.

doi:10.1111/j.1467-8535.2008.00884.x

Choo, S. S. Y., Rotgans, J. I., Yew, E. H. J., & Schmidt, H. G. (2011). Effect of worksheet

scaffolds on student learning in problem-based learning. Advances in Health Sciences

Education, 16, 517–528. doi:10.1007/s10459-011-9288-1

Cockcroft, P. D. (2007). Clinical reasoning and decision analysis. Veterinary Clinics of North


America - Small Animal Practice, 37(3), 499–520. doi:10.1016/j.cvsm.2007.01.011

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New Jersey: Lawrence

Erlbaum Associates Inc. Publishers.

Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking

visible. American Educator, 15(3), 6–11, 38–46.

Collins, A., Brown, J. S., & Newman, S. E. (1987). Cognitive apprenticeship: Teaching the craft

of reading, writing, and mathematics. Champaign, Illinois.

Collyer, S. C., & Malecki, G. S. (1998). Tactical decision making under stress: History and

overview. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress:

Implications for individual and team training (pp. 3–15). Washington, DC: American

Psychological Association. doi:10.1080/01402389908425338

Cromley, J. G., & Azevedo, R. (2005). What do reading tutors do? A naturalistic study of more

and less experienced tutors in reading. Discourse Processes, 40(2), 83–113.

doi:10.1207/s15326950dp4002

Croskerry, P. (2009). A universal model of diagnostic reasoning. Academic Medicine: Journal

of the Association of American Medical Colleges, 84(8), 1022–1028.

doi:10.1097/ACM.0b013e3181ace703

Croskerry, P., & Nimmo, G. R. (2011). Better clinical decision making and reducing diagnostic

error. Journal of the Royal College of Physicians of Edinburgh, 41(2), 155–162.

doi:10.4997/JRCPE.2011.208

Croskerry, P., & Norman, G. (2008). Overconfidence in clinical decision making. American

Journal of Medicine, 121(5 SUPPL.), 24–29. doi:10.1016/j.amjmed.2008.02.001

Cutrer, W. B., Sullivan, W. M., & Fleming, A. E. (2013). Educational strategies for improving


clinical reasoning. Current Problems in Pediatric and Adolescent Health Care, 43(9), 248–

257. doi:10.1016/j.cppeds.2013.07.005

Davis, E. A. (2003). Prompting middle school science students for productive reflection: Generic

and directed prompts. Journal of the Learning Sciences, 12(1), 91–142.

doi:10.1207/S15327809JLS1201_4

Davis, E. A., & Linn, M. C. (2000). Scaffolding students’ knowledge integration: prompts for

reflection in KIE. International Journal of Science Education, 22(8), 819–837.

doi:10.1080/095006900412293

Eddy, D. M. (1990). Clinical decision making: from theory to practice. JAMA: The Journal of

the American Medical Association, 263(13), 1839–1841. doi:10.1001/jama.263.13.1839

Elstein, A. S., Schwartz, A., & Schwarz, A. (2002). Clinical problem solving and diagnostic

decision making: selective review of the cognitive literature. British Medical Journal, 324,

729–732. doi:10.1136/bmj.324.7339.729

Eng, J. (2003). Sample size estimation: how many individuals should be studied? Radiology,

227(2), 309–313. doi:10.1148/radiol.2272012051

Epstein, R. M. (1999). Mindful practice. JAMA: The Journal of the American Medical

Association, 282(9), 833–9. doi:10.1001/jama.282.9.833

Ertmer, P. A., & Russell, J. D. (1995). Using case studies to enhance instructional design

education. Educational Technology, 35(4), 23–31.

Evans, J. S. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive

Sciences, 7(10), 454–459. doi:10.1016/j.tics.2003.08.012

Evans, J. S. B. T., & Over, D. E. (1996). Rationality and reasoning. Hove, UK: Psychology

Press.


Field, A. (2013). Discovering statistics using IBM SPSS statistics. (M. Carmichael, Ed.) (Kindle

ed.). SAGE Publications.

Fletcher, O. J., Hooper, B. E., & Schoenfeld-Tacher, R. (2015). Instruction and curriculum in

veterinary medical education: A 50-year perspective. Journal of Veterinary Medical

Education, 42(5), 489–500. doi:10.3138/jvme.0515-071

Flynn, A. E., & Klein, J. D. (2001). The influence of discussion groups in a case-based learning

environment. Educational Technology Research and Development, 49(3), 71–86.

Ge, X., & Land, S. M. (2003). Scaffolding students’ problem-solving processes in an ill-

structured task using question prompts and peer interactions. Educational Technology

Research and Development, 51(1), 21–38. doi:10.1007/BF02504515

Gee, J. P. (1997). Thinking, learning, and reading: The situated sociocultural mind. In D.

Kirshner & J. A. Whitson (Eds.), Situated cognition: Social, semiotic, and psychological

perspectives (pp. 235–260). Mahwah, NJ: Lawrence Erlbaum Associates.

Geweke, J. F., & Singleton, K. J. (1980). Interpreting the likelihood ratio statistic in factor

models when sample size is small. Journal of the American Statistical Association, 75(369),

133–137. doi:10.2307/2287400

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the

effectiveness of peer feedback for learning. Learning and Instruction, 20(4), 304–315.

doi:10.1016/j.learninstruc.2009.08.007

Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2010). A comparative study of peer

and teacher feedback and of various peer feedback forms in a secondary school writing

curriculum. British Educational Research Journal, 36(1), 143–162.

doi:10.1080/01411920902894070


Hammond, K. R. (1996). Coping with uncertainty: The rivalry between intuition and analysis. In K. R. Hammond (Ed.), Human judgment and social policy: Irreducible uncertainty,

inevitable error, unavoidable injustice (pp. 60–93). New York: Oxford University Press.

Hansen, W. F., Ferguson, K. J., Sipe, C. S., & Sorosky, J. (2005). Attitudes of faculty and

students toward case-based learning in the third-year obstetrics and gynecology clerkship.

American Journal of Obstetrics and Gynecology, 192(2), 644–647.

doi:10.1016/j.ajog.2004.10.595

Harasym, P. M. H. J. A. W. W. (1997). Helping students to learn to think like experts when

solving clinical problems. Academic Medicine, 72(3), 173–179.

Harbison, J. (1991). Clinical decision making in nursing. Journal of Advanced Nursing, 16(4),

404–407. doi:10.1111/j.1365-2648.1991.tb03429.x

Hardin, L. E. (2003a). Problem-solving concepts and theories. Journal of Veterinary Medical

Education, 30(3), 226–229.

Hardin, L. E. (2003b). Research in medical problem solving: a review. Journal of Veterinary

Medical Education, 30(3), 230–235. doi:10.3138/jvme.30.3.230

Higgs, J., & Jones, M. A. (2008). Clinical decision making and multiple problem spaces. In J.

Higgs, M. Jones, S. Loftus, & N. Christensen (Eds.), Clinical reasoning in the health professions (3rd ed., Chapter 1). Retrieved from amazon.com

Jenkins, H. M. (1985). Improving clinical decision making in nursing. The Journal of Nursing

Education, 24(6), 242–243.

Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured

problem-solving learning outcomes. Educational Technology Research and Development,

45(1), 65–94.


Jonassen, D. H. (2010). Research Issues in Problem Solving. In The 11th International

Conference on Education Research: New Educational Paradigm for Learning and

Instruction (pp. 1–15).

Jonassen, D. H., & Hernandez-Serrano, J. (2002). Case-based reasoning and instructional design:

Using stories to support problem solving. Educational Technology Research and

Development, 50(2), 65–77. doi:10.1007/BF02504994

Jonassen, D. H., & Hung, W. (2008). All problems are not equal: Implications for problem-based

learning. Interdisciplinary Journal of Problem-Based Learning, 2(2), 10–13.

doi:10.7771/1541-5015.1080

Jones, M. A. (1992). Clinical reasoning in manual therapy. Physical Therapy, 72(12), 875–884.

doi:1454863

Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. J. Holyoak &

R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 267–

294). New York, NY: Cambridge University Press.

Kassirer, J. P. (2010). Teaching clinical reasoning: case-based and coached. Academic

Medicine: Journal of the Association of American Medical Colleges, 85(7), 1118–1124.

doi:10.1097/ACM.0b013e3181d5dd0d

Khosa, D. K., Volet, S. E., & Bolton, J. R. (2014). Making clinical case-based learning in

veterinary medicine visible: Analysis of collaborative concept-mapping processes and

reflections. Journal of Veterinary Medical Education, 41(4), 406–417.

doi:10.3138/jvme.0314-035R1

King, L., & MacLeod, M. (2002). The role of intuition and the development of expertise in

surgical ward and intensive care nurses. Journal of Advanced Nursing, 37, 322–329.


Kitchener, K. S. (1983). Cognition, metacognition, and epistemic cognition: a three-level model

of cognitive processing. Human Development, 4, 222–232.

Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G.

A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action:

Models and methods (pp. 138–147). Norwood, NJ: Ablex Publishing Corporation.

Klein, G. A. (2008). Naturalistic decision making. Human Factors, 50(3), 456–460.

doi:10.1518/001872008X288385.

Klein, G. A., & Klinger, D. (1991). Naturalistic decision making. Human Systems IAC Gateway,

2(1), 16–19.

Kolodner, J. L., Hmelo, C. E., & Narayanan, N. H. (1996). Problem-based learning meets case-based reasoning. In ICLS ’96 Proceedings of the 1996 international conference on

Learning sciences (pp. 188–195).

Kolodner, J. L., Owensby, J. N., & Guzdial, M. (2004). Case-based learning aids. In D. H.

Jonassen (Ed.), Handbook of Research for Education Communications and Technology

(2nd ed., Vol. 2, pp. 829–861). Mahwah, NJ: Lawrence Erlbaum Associates. Retrieved

from http://www.aect.org/edtech/32.pdf

Ladyshewsky, R., & Jones, M. A. (2008). Peer coaching to generate clinical reasoning skills. In

J. Higgs, M. A. Jones, S. Loftus, & N. Christensen (Eds.), Clinical reasoning in the health

professions (3rd ed.). Elsevier Health Sciences. Retrieved from amazon.com

Land, S. M., & Zembal-Saul, C. (2003). Scaffolding reflection and articulation of scientific

explanations in a data-rich, project-based learning environment: An investigation of

progress portfolio. Educational Technology Research and Development, 51(4), 65–84.

doi:10.1007/BF02504544


LeBoeuf, R. A., & Shafir, E. B. (2005). Decision making. In K. J. Holyoak & R. G. Morrison

(Eds.), The Cambridge handbook of thinking and reasoning (pp. 243–265). Cambridge

University Press.

Lilliefors, H. W. (1967). On the Kolmogorov-Smirnov test for normality with mean and variance

unknown. Journal of the American Statistical Association, 62(318), 399–402.

doi:10.1080/01621459.1967.10482916

Linn, M. (2000). Designing the knowledge integration environment. International Journal of

Science Education, 22(8), 781–796.

Lipshitz, R. (1993). Converging themes in the study of decision making in realistic settings. In

G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in

action: Models and methods (pp. 103–137). Norwood, NJ: Ablex Publishing Corporation.

Lipshitz, R., Klein, G. A., Orasanu, J., & Salas, E. (2001). Taking stock of naturalistic decision

making. Journal of Behavioral Decision Making, 14(5), 331–352. doi:10.1002/bdm.381

Tomita, M. R. (2006). Methods of analysis: from univariate to multivariate statistics. In G.

Kielhofner (Ed.), Research in occupational therapy: Methods of inquiry for enhancing

practice (pp. 243–280). Philadelphia, PA: F.A. Davis Company.

Mamede, S., & Schmidt, H. G. (2004). The structure of reflective practice in medicine. Medical

Education, 38(12), 1302–1308. doi:10.1111/j.1365-2929.2004.01917.x

Mamede, S., & Schmidt, H. G. (2005). Correlates of reflective practice in medicine. Advances in

Health Sciences Education, 10(4), 327–337. doi:10.1007/s10459-005-5066-2

Mann, K., Gordon, J., & MacLeod, A. (2009). Reflection and reflective practice in health

professions education: A systematic review. Advances in Health Sciences Education, 14(4),

595–621. doi:10.1007/s10459-007-9090-2


Maudsley, G., & Strivens, J. (2000). Promoting professional knowledge, experiential learning

and critical thinking for medical students. Medical Education, 34(7), 535–544.

doi:10.1046/j.1365-2923.2000.00632.x

May, S. (2013). Clinical reasoning and case-based decision making: the fundamental challenge

to veterinary educators. Journal of Veterinary Medical Education, 40(3), 200–209.

doi:10.3138/jvme.0113-008R

McKenzie, B. (2014). Veterinary clinical decision-making: cognitive biases, external constraints,

and strategies for improvement. Journal of the American Veterinary Medical Association,

244(3), 271–276. doi:10.2460/javma.244.3.271

Moon, J. A. (2004). A handbook of reflective and experiential learning: Theory and practice.

Oxon, OX: RoutledgeFalmer.

Mory, E. H. (2004). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research

on educational communications and technology (2nd ed., pp. 745–783). Mahwah, NJ:

Lawrence Erlbaum Associates.

Ntoumanis, N., & Myers, N. D. (2016). An introduction to intermediate and advanced statistical

analyses for sport and exercise scientists. John Wiley & Sons.

Orasanu, J., & Connolly, T. (1993). The reinvention of decision making. In G. A. Klein, J.

Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and

methods (pp. 3–20). Norwood, NJ: Ablex Publishing Corporation.

Patel, V. L., Arocha, J. F., & Zhang, J. (2005). Thinking and reasoning in medicine. In K. J.

Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp.

281–286). Cambridge: Cambridge University Press.

Patton, D. D. (1978). Introduction to clinical decision making. Seminars in Nuclear Medicine,


8(4), 273–282.

Pedersen, S., & Liu, M. (2002). The effects of modeling expert cognitive strategies during

problem-based learning. Journal of Educational Computing Research, 26(4), 353–380.

doi:10.2190/6NL3-HMED-J8HE-GD4T

Quintana, C., Zhang, M., & Krajcik, J. (2005). A framework for supporting metacognitive

aspects of online inquiry through software-based scaffolding. Educational Psychologist,

40(4), 235–244. doi:10.1207/s15326985ep4004_5

Rashotte, J., & Carnevale, F. A. (2004). Medical and nursing clinical decision making: a comparative epistemological analysis. Nursing Philosophy: An International Journal for

Healthcare Professionals, 5(2), 160–174. doi:10.1111/j.1466-769X.2004.00175.x

Riegger, M. H. (2011, June). Using S.O.A.P. is good medicine. DVM360 Magazine, 1–2.

Retrieved from http://veterinarynews.dvm360.com/print/327899?page=full

Rogoff, B. (1990). Peer interaction and cognitive development. In Apprenticeship in thinking:

Cognitive development in social context (pp. 171–188). New York, NY: Oxford University

Press.

Sato, M. (2013). Beliefs about peer interaction and peer corrective feedback: Efficacy of

classroom intervention. Modern Language Journal, 97(3), 611–633. doi:10.1111/j.1540-

4781.2013.12035.x

Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New York:

Basic Books.

Schon, D. A. (1988). From technical rationality to reflection-in-action. In J. A. Dowie & A. S.

Elstein (Eds.), Professional judgment: A reader in clinical decision making (pp. 60–77).

New York, NY: Cambridge University Press. doi:10.1097/00006247-198308000-00009


Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Belmont, CA: Wadsworth.

Shin, N., Jonassen, D. H., & McGee, S. (2003). Predictors of well-structured and ill-structured problem solving in an astronomy simulation. Journal of Research in Science Teaching, 40(1), 6–33. doi:10.1002/tea.10058

Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as analysis and risk as feelings. Risk Analysis, 24(2).

Smith, M., Higgs, J., & Ellis, E. (2008). Factors influencing clinical decision making. In J. Higgs, M. Jones, S. Loftus, & N. Christensen (Eds.), Clinical reasoning in the health professions (3rd ed.).

Song, H.-D., Grabowski, B. L., Koszalka, T. A., & Harkness, W. L. (2006). Patterns of instructional-design factors prompting reflective thinking in middle-school and college level problem-based learning environments. Instructional Science, 34(1), 63–87. doi:10.1007/s11251-005-6922-4

Spiro, R. J., Coulson, R. L., Feltovich, P. J., & Anderson, D. K. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. Champaign, IL.

Swanson, R. A., & Holton, E. F. (2005). Research in organizations: Foundations and methods of inquiry. San Francisco: Berrett-Koehler Publishers.

Terry, W., & Higgs, J. (1993). Educational programmes to develop clinical reasoning skills. Australian Journal of Physiotherapy, 39(1), 47–51. doi:10.1016/S0004-9514(14)60469-4

Thistlethwaite, J. E., Davies, D., Ekeocha, S., Kidd, J. M., MacDougall, C., Matthews, P., … Clay, D. (2012). The effectiveness of case-based learning in health professional education. Medical Teacher, 34(6), 421–444. doi:10.3109/0142159X.2012.680939

Thomas, S. A., Wearing, A. J., & Bennett, M. J. (1991). Clinical decision making for nurses and health professionals. Sydney: W. B. Saunders/Baillière Tindall.

Thompson, C., & Dowding, D. (2002). Clinical decision making and judgement in nursing. London: Churchill Livingstone.

Thurman, J., Volet, S. E., & Bolton, J. R. (2009). Collaborative, case-based learning: How do students actually learn from each other? Journal of Veterinary Medical Education, 36(3), 297–304. doi:10.3138/jvme.36.3.297

Uribe, D., Klein, J. D., & Sullivan, H. (2003). The effect of computer-mediated collaborative learning on solving ill-defined problems. Educational Technology Research and Development, 51(1), 5–19. doi:10.1007/BF02504514

van den Boom, G., Paas, F. G. W. C., & van Merriënboer, J. J. G. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction, 17(5), 532–548. doi:10.1016/j.learninstruc.2007.09.003

van der Pol, J., van den Berg, B. A. M., Admiraal, W. F., & Simons, P. R. J. (2008). The nature, reception, and use of online peer feedback in higher education. Computers and Education, 51(4), 1804–1817. doi:10.1016/j.compedu.2008.06.001

Van Manen, M. (1991). Reflectivity and the pedagogical moment: The normativity of pedagogical thinking and acting. Journal of Curriculum Studies, 23, 507–536.

Vandeweerd, J.-M., Vandeweerd, S., Gustin, C., Keesemaecker, G., Cambier, C., Clegg, P., … Gustin, P. (2012a). Clinical reasoning and decision analysis. Journal of Veterinary Medical Education, 39(2), 142–151. doi:10.3138/jvme.0911.098R1


Vandeweerd, J.-M., Vandeweerd, S., Gustin, C., Keesemaecker, G., Cambier, C., Clegg, P., … Gustin, P. (2012b). Understanding veterinary practitioners’ decision-making process: Implications for veterinary medical education. Journal of Veterinary Medical Education, 39(2), 142–151. doi:10.3138/jvme.0911.098R1

Volet, S., Summers, M., & Thurman, J. (2009). High-level co-regulation in collaborative learning: How does it emerge and how is it sustained? Learning and Instruction, 19(2), 128–143. doi:10.1016/j.learninstruc.2008.03.001

Whitney, M., Herron, M., & Weeks, B. (1993). Preclinical curricular alternatives: History and rationale of problem-based medical education. Journal of Veterinary Medical Education, 20, 2–8.

Williams, S. M. (1992). Putting case-based instruction into context: Examples from legal and medical education. Journal of the Learning Sciences, 2(4), 367–427. doi:10.1207/s15327809jls0204_2

Wojcikowski, K., & Brownie, S. (2013). Generic reflective feedback: An effective approach to developing clinical reasoning skills. Journal of Computer Assisted Learning, 29, 371–382. doi:10.1111/jcal.12012

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17, 89–100. Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1469-7610.1976.tb00381.x/abstract

Yang, Y.-F. (2011). A reciprocal peer review system to support college students’ writing. British Journal of Educational Technology, 42(4), 687–700. doi:10.1111/j.1467-8535.2010.01059.x

Yates, J., Veinott, E., & Patalano, A. (2003). Hard decisions, bad decisions: On decision quality and decision aiding. In S. L. Schneider & J. Shanteau (Eds.), Emerging perspectives on judgment and decision research (pp. 13–63). New York: Cambridge University Press. Retrieved from http://works.bepress.com/andrea_patalano/18/


APPENDIX A

PEER FEEDBACK GUIDELINES WITH REFLECTIVE PROMPTS


APPENDIX B

ONLINE SURVEY
