Introduction
Purpose
Perhaps the definitive description of critical care nursing practice was captured in
this statement by Benner, Hooper-Kyriakidis and Stannard (1999): "Critical care nursing
practice is intellectually and emotionally challenging because it requires quick judgments
and responses to life-threatening conditions where there are narrow margins for error" (p.
16). The gravity of this statement is realized by critical care educators who seek methods
that prepare nurses for the challenges facing them in clinical practice. In the last five
years, articles in the nursing literature have recommended simulation as a teaching
method for new critical care and medical-surgical nurses (Eaves & Flagg, 2001; Morton,
1997; Rauen, 2001; Weis & Guyton-Simmons, 1998; Vandry & Whitman, 2001). These
authors contend simulation, which can be defined as "an event or situation made to
resemble clinical practice as closely as possible to teach theory, assessment, technology,
pharmacology and skills" (Rauen, 2001, p. 96), can effectively teach critical thinking
(CT), clinical judgment (CJ), technical skills, and time management to nurses.
Articles have also appeared in the medical literature asserting simulation is an
effective teaching method (Gaba & DeAnda, 1988; Good et al., 1992; Gordon, 2000).
These articles suggest simulation accelerates the learning of basic skills and CJ. If these
conclusions are valid, simulation has the potential to revolutionize nursing education.
However, simulation is a very costly teaching method (Morton, 1996). The cost of
establishing a simulation laboratory can range from $25,000 to $1 million (Jha,
Duncan & Bates, 2001). To justify such expenditures, the superiority of simulation must
be conclusive. Therefore, the purpose of this paper is to critique relevant literature
examining the efficacy of simulation in the development of CT and CJ skills to determine
if there is sufficient evidence to recommend its use for nursing education.
Significance of the Issue
Nurse educators recognized many years ago that knowledge gained from
classroom instruction is not necessarily transferred to clinical practice (Wong, 1979).
Wisser (1974) suggested traditional lecture-based teaching methods stress the importance
of learning concepts and principles while failing to teach students to synthesize this
knowledge and relate it to clinical practice. In response to the recognized disconnection
between knowledge acquisition and clinical application, nurse researchers began to
examine the role of CT in the learning process (Dobrzykowski, 1994; Facione &
Facione, 1996b; Maynard, 1996; Oermann, 1998; Tanner, 1987). The Delphi Research
Project sponsored by the American Philosophical Association published the following
consensus definition of CT: "We understand CT to be purposeful, self- regulatory
judgment which results in interpretation, analysis, evaluation, and inference, as well as
the explanation of the evidential, conceptual, methodological, criteriological, or
contextual considerations upon which that judgment is based" (Facione & Facione,
1996b). While an extensive discussion of the relationship of CT to nursing education is
beyond the scope of this paper, some knowledge of CT literature is essential if we are to
understand the role it plays in the concept of simulation as a teaching method.
Hundreds of articles have been published in nursing journals addressing the
concept of CT in nursing education. Despite this abundance of literature, ambiguity
remains as to whether traditional nursing education develops CT skills. In a landmark
review, Kintgen-Andrews (1991) examined 18 nursing studies and concluded strong
evidence linking nursing education to the development of CT abilities did not exist.
Additionally, Adams (1999) critiqued 20 quantitative nursing research studies on this
topic and concluded "there is no consistent evidence that nursing education contributes to
increasing the CT abilities of nursing students" (p. 116). One of the significant issues cited
by Adams was the studies' failure to describe the teaching methods used, rendering them
of little value to educators attempting to evaluate the efficacy of methods used to teach
CT skills.
Despite the lack of clarity in the nursing literature regarding which methods best
facilitate acquisition of CT skills, the need to teach these skills to nurses continues to be a
major issue in nursing education driven by regulatory organizations, a complex health
care system and the nursing shortage (Case, 1995; Elliot, 1996; Whiteside, 1997).
Nursing school accrediting organizations view the acquisition of CT skills as necessary to
build the core competencies of nurses (American Association of Colleges of Nursing,
1998). Therefore, there is now a clear mandate to develop CT skills throughout the
program's curriculum (National League for Nursing Accrediting Commission, 2001).
Additionally, the complex health care system has increased the demands placed upon
nurses in the last 20 years (Alfaro-LeFevre, 1999; Morton, 1997). Rapid advances in
technology, a larger knowledge base and the higher acuity of patients have made CT and
CJ core competencies in nursing practice (Alfaro-LeFevre, 1999; Alspach, 1995).
According to recent data, there were 126,000 nursing vacancies as of June 2001, and it is
projected that by 2020, the shortage of RNs will exceed 400,000 nationally (AACN,
2002). This has resulted in the need to train graduate RNs for high-risk specialty areas,
and this trend is expected to continue (Morton, 1997). Employers are pressuring
educational institutions to better prepare students for the challenges they will face in the
clinical setting (Morton, 1997). These factors demand nurse educators use teaching
methods that facilitate CT skills development. This leads to the question addressed in this
critical literature review: Has simulation been proven effective in teaching CT skills?
Organization of the Review
The topic of the efficacy of simulation as a teaching method evolved from reading
nursing literature examining the relationship of nursing education to the acquisition of CT
skills. Initial searches were conducted using PubMed and CINAHL databases. The
purpose of these initial searches was to reveal all relevant concepts pertaining to the
relationship between nursing education and CT. The following search terms were used to
find relevant articles: "CT and nursing education" and "CT and teaching methods," excluding
only those works not written in English. The PubMed and CINAHL searches listed 525
and 213 articles respectively, although duplications existed. Article abstracts were
examined and relevant articles were obtained at the following libraries: UCSF Main
Library, UCSF Learning Resources Center, Holy Names College, Samuel Merritt
College, Stanford University, and the Kaiser Permanente Clinical Libraries. Efforts were
made to identify historical articles and books by examining article reference lists to
identify citations appearing in multiple works. Once identified, these works were
obtained. Reading was then done on the topic of CT in nursing education. As it became
clear this topic was too broad to be examined in a critical literature review, the focus was
refined to simulation and its relationship to the development of CT skills. Additional
searches conducted in CINAHL and PubMed using the terms "CT and simulation,"
"simulation and teaching methods," and "simulation and education" revealed an
additional 384 articles.
In selecting the material to be reviewed, every attempt was made to locate
quantitative research from the nursing literature. Quantitative articles from the medical
literature focusing on the use of simulation to teach CJ in the areas of medical education,
critical care, trauma and emergency medicine were included because of the similarities of
these settings to critical care nursing education.
In addition to the searches conducted, inquiries regarding the existence of any
unpublished quantitative research on the use of simulation as a teaching method were
sent to Patricia Gonce Morton, RN, PhD, and Carol Rauen, RN, MS, CCRN. These
professors use simulation laboratories to teach nursing students at the University of
Maryland-Baltimore and Georgetown University respectively, and have published
articles on its use in the critical care literature. Replies were received indicating they
knew of no unpublished quantitative research on the topic.
Literature Review of the Knowledge Base
Non-research Literature/Theoretical Base
Many articles of varying quality exist in the nursing literature discussing the use
of simulation to teach CT. The non-research material included in this review describes
the theoretical base supporting the use of simulation as a method to teach CT. This
literature includes educational theories (e.g. adult learning theory and cognitive learning
theory) and work examining the link between CT, CJ and excellence in clinical nursing
practice. This section of the review will explore relevant works from these areas.
Adult Learning Theory and Cognitive Learning Theory
Adult learning theory (ALT) and cognitive learning theory (CLT) support the
concept of simulation as a method to teach CT and CJ. Common to ALT and CLT is a
primary focus on the learner rather than the teacher. The relationship of these
learning theories to simulation is discussed in this section.
Adult Learning Theory is useful in developing curriculum for critical care nurses
(Dobbin, 2001). Several adult learning theory principles (Bastable, 1997) support the use
of active experiential methods (e.g. simulation) to teach CT. These principles include: (a)
learning occurs when an immediate problem, need or deficit is identified; (b) learning is
centered on the person and problem; (c) the instructor's role is to facilitate learning; (d)
the learner is an active participant in the learning process; (e) the learner participates in
learning within a group; (f) application and timely feedback reinforce what is learned.
Cognitive Learning Theory also supports the use of simulation as a method to
teach CT. It proposes learning occurs through the individual's cognitive processes. The
learner's memory, thought processes, perception, and ways of structuring information are
instrumental to the process of learning (Bastable, 1997). Proponents of CLT see the
learner as an active participant in the process of acquiring knowledge and new skills
(Dobbin, 2001). It is expected that the participant will demonstrate the ability to apply
knowledge to situations encountered in the real world. The role of the instructor is to
engage participants in activities that encourage the discovery of the relationships between
new and existing knowledge. Dobbin (2001) describes the relationship between CT and
CLT as inextricable. To enhance concepts and understanding, she believes the instructor
should facilitate discussion through the use of examples and the illustration of analogies.
The Link Between Critical Thinking, Clinical Judgment, Excellence in Clinical
Nursing Practice and Simulation
Several nursing scholars have addressed the link between CT and CJ. Alfaro-
LeFevre (1999) asserts that CT and clinical reasoning skills are necessary to make CJs.
She emphasizes that the ability to make CJs comes from a "marriage of theoretical and
experiential knowledge" (p. 83). Facione & Facione (1996a) underscore the relationship
of CT to CJ in their statement "professional judgment requires CT" (p. 42). They urge
knowledge acquisition "be examined within a process framework that demands
theoretical connections between believed facts and practice observations" (Facione,
Facione & Sanchez, 1994, p. 349). Oermann (1998) notes CT ability is imperative when
the patient's problem is not clear or the appropriate intervention is not obvious. Del
Bueno (1983) observed that the nurse's knowledge of content and theory does not
necessarily translate into CJ at the bedside. She recommends that CT and CJ be taught
using multiple and diverse learning strategies such as experiential, simulated and
hypothetical methods. Schank (1990) believes that CT and CJ are best strengthened
through the utilization of teaching methods that focus on knowledge application, analysis,
synthesis and evaluation while allowing the learner to practice essential skills in an active
way. Simulation is a teaching method that allows for this type of active practice.
In 1984, Patricia Benner published a landmark work describing the role clinical
experience plays in increasing the nurse's competency towards expert practice. She
stresses the value of expert clinical teaching in the development of new nurses. She states
the expert clinician can present the neophyte nurse with "paradigm cases that transmit
more than can be conveyed through abstract principles or guidelines" (p. 8). In order for
the neophyte nurse to learn from the expert clinician's paradigm case, the neophyte must
"actively rehearse or imagine the situation" (p. 8). Benner sees simulations as an effective
way to teach because they "require actions and decisions from the learner" and can
"provide the learner with opportunities to gain paradigm cases in a guided way” (p. 9).
In 2001, Rauen outlined how simulation can be used across the novice to expert
continuum to develop CT skills. Novice nurses will benefit from simulation as they apply
didactic knowledge and integrate information while practicing skills. The advanced
beginner nurse benefits from simulations that assist in pattern identification and allow
prioritization of concepts or care needs. Rauen observes nurses at the competent and
proficient levels benefit from the use of simulation to teach CT skills. The expert
practitioner will benefit from a safe environment where they can think creatively, validate
their intuition, and articulate the thinking processes underlying their expertise.
In 1999, Benner et al. observed that educators often develop teaching-learning
strategies that focus on either process or content, while not acknowledging the two are
inextricably linked. They state that experiential learning requires active participation and
is not guaranteed by the passage of time. Experiential learning requires engagement in the
situation, which produces a narrative memory assisting the nurse to act with greater skill
in the future. Although simulation does not have the richness or complexity of
experiential learning, it does offer opportunities to merge process with content.
Research Literature
Quantitative Research from the Nursing Literature
Only one quantitative study in the nursing literature examining the use of simulation
in the development of CT skills was identified. This study, published by Chau et al.
(2001) examined the use of simulated videotaped vignettes to develop CT and clinical
management abilities of first and second year baccalaureate nursing students.
A convenience sample of 83 students volunteered to participate in the study;
however, 99 students elected not to participate, which may have biased the sample (Polit
& Hungler, 1995). No other inclusion or exclusion criteria are given. The sample
consisted of more second year students (54.2%) and more female students (85.5%). The
subjects' mean age was 19.8 years (range 18-23), 85.5% had some prior work
experience, and 92% had not taken a college course on CT.
The authors used a quasi-experimental, two group pretest-posttest design, which
does not control for several threats to internal validity including history, maturation,
testing, and instrumentation, and all threats to external validity (Campbell & Stanley,
1963). The design was appropriate for the stated purpose. The authors developed a total
of eight video vignettes designed to simulate patients in the clinical setting. According to
the authors, two experts experienced in clinical teaching established face and content
validity of two of the vignettes. Validation of all scenarios would have strengthened the
study design. Furthermore, the authors did not state how the students' clinical
management abilities were evaluated, which was part of the stated purpose.
Critical thinking was measured before and after the intervention using the
California Critical Thinking Skills Test (CCTST) and nursing knowledge tests (NKT)
developed by the authors. The CCTST has been shown to be a valid and reliable
instrument as evidenced by a Kuder-Richardson 20 score of 0.74 (Chau et al., 2001). In
addition to the CCTST, the NKT were developed to measure CT specific to each
vignette. The NKT were administered after the first and fourth vignettes. The authors
report content and face validity of these instruments were determined by six experienced
nurse educators, though their qualifications are not discussed. Omission of this
information makes it difficult to judge instrument validity and reliability.
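For readers unfamiliar with the statistic, the Kuder-Richardson 20 coefficient is computed directly from a matrix of dichotomously scored items. A minimal Python sketch follows; the responses are simulated and the 83-by-34 matrix shape is assumed purely for illustration, so real CCTST response data would be needed to reproduce the reported 0.74.

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a subjects-by-items matrix of 0/1 scores."""
    k = items.shape[1]                          # number of items
    p = items.mean(axis=0)                      # proportion answering each item correctly
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Simulated responses: 83 students by 34 items (both numbers are hypothetical).
rng = np.random.default_rng(0)
demo = (rng.random((83, 34)) < 0.6).astype(int)
print(round(kr20(demo), 2))  # near zero here, since the simulated items are independent
```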
The authors used an analysis of variance (ANOVA) to determine the effect of the
educational intervention on CCTST and NKT scores using the independent variables (IV)
of year and pre-post test indicator. A multiple comparison test was used to identify
differences according to the students' year. It appears they appropriately used an ANOVA
to analyze the difference between mean scores (dependent variable [DV], interval data
[ID]) of the two groups while testing year and pre-post test indicator (IV, nominal data
[ND]) (Polit & Hungler, 1995). The multiple comparison test was appropriately used to
compare different pairs of means between the two groups and Scheffe's test was
appropriately used to determine where the differences were between groups. The level of
significance was set at p < .05.
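To make the described analysis concrete, the sketch below runs the same kind of two-way ANOVA (year by pre-post occasion) on simulated scores. The values are invented solely so the example executes; they are not the study's data, and the Scheffe post hoc step is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 83  # sample size from Chau et al.; the scores themselves are simulated
year = rng.choice(["1", "2"], n)
pre = rng.normal(25, 6, n)
post = pre + rng.normal(5, 4, n)

# Long format: one row per student per testing occasion.
df = pd.DataFrame({
    "score": np.concatenate([pre, post]),
    "occasion": ["pre"] * n + ["post"] * n,
    "year": np.tile(year, 2),
})
model = ols("score ~ C(year) * C(occasion)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for year, occasion, and interaction
```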
Results from this study are reported in Appendix A. The only statistically
significant result was an increase in mean posttest NKT scores compared to pretest NKT
scores for first year students (31.51 versus 24.37, p = 0.01, see Table A2). The
educational intervention failed to produce a measurable improvement in the pretest-
posttest CCTST scores by year or between the two years (see Table A3). The authors'
major conclusion is that the vignettes improve knowledge. This conclusion is not justified
considering the quasi-experimental design, convenience sampling methods, and the use
of a tool for which the validity and reliability were not well established. In addition, only
the first year students showed an increase in posttest NKT scores. The authors
acknowledge that limited exposure to the teaching method may have been inadequate to
measure gains in CT and suggest longitudinal studies may better measure CT skills.
Further, they discuss whether the use of an "acontextual" standardized CT test is an
appropriate measure of students' CT skills. Chau et al. (2001) propose future research
incorporate a control group. They do not address the generalizability of the study;
however, the use of a convenience sample from a single setting and the weaknesses of the
study design, which fails to control for any threats to external validity, impact the ability
to generalize the findings to other settings or populations (Polit & Hungler, 1995).
Quantitative Research from the Medical Literature
Several studies have been published addressing the use of simulation in medical
education. The studies reviewed in this section were selected because they discuss areas
that should be considered when evaluating the use of simulation to teach CT and CJ.
In 2002, Morgan et al. published a study comparing the efficacy of simulator-
based teaching to videotape teaching as measured by performance-based assessments and
written examinations. The authors recruited a convenience sample from a single
university medical school. Of a class of 177 students, 144 volunteered to participate. The
only inclusion criterion was enrollment in the fourth year of medical school. The
researchers did not exclude anyone belonging to this group; however, 33 students
excluded themselves, which may have resulted in selection bias (Polit & Hungler, 1995).
The authors used a quasi-experimental study design, which included a pretest and
posttest and randomization to treatment group. Although the design was appropriate for
the stated purpose, it was subject to several threats to internal validity including history,
maturation, selection, testing, and instrumentation; however, these threats were likely
minimized by the short study period. The study design did not control for any threats to
external validity including the Hawthorne effect, novelty effect and experimenter effect.
The study protocol is well described. The faculty used course objectives to create
three scenarios with corresponding pretests and posttests. The faculty and senior residents
involved in teaching were informed of the purpose and learning objectives and received
training. Faculty members scoring student performance were trained in the use of the
tool. After orientation to the simulation lab, students took a 5-minute performance-based
pretest scored by a faculty member. Students were then randomized in groups of five and
received 1.5 hours of education using either the simulator or videotape. Those in the
simulation group actively made medical judgments and managed a simulated crisis.
Those in the video group were shown a scenario depicting a faculty member
appropriately managing a simulated crisis. Three hours after the educational session, the
students completed the posttest. Pretest and posttest content was identical and was
designed to test their ability to actively manage the same critical event presented in the
educational session. Scoring was based on the student's ability to identify the problem,
make a differential diagnosis, articulate an appropriate treatment plan, and identify either
a pharmacological intervention or the precipitating cause of the crisis. The students took a
written exam based on the scenario content 2, 16 or 30 days after the intervention.
Information regarding how measurement validity and reliability were established is
not provided. Since the pretest and posttest contained identical content, testing and
instrumentation bias threatens the study's internal validity (Polit & Hungler, 1995). The
authors did not videotape the tests, which would have allowed them to use more than one
evaluator and establish interrater reliability. Videotaping the tests would also have
allowed them to blind the evaluator to whether they were scoring the pretest or posttest,
which would have strengthened measurement validity (Hulley et al., 2001). Omission of
this information impacts the reliability of the findings.
Morgan et al. (2002) appropriately used a repeated-measures ANOVA to compare
pretest and posttest scores (DV, ID) of all students participating in the three scenarios
(IV, ND) (Polit & Hungler, 1995). Using an ANOVA allowed the authors to analyze the
effect both between subjects (educational intervention) and within subjects (pretest-
posttest scores). A repeated-measures ANOVA was also used appropriately to compare
the three scenarios and the educational intervention (IV, ND) to the pretest and posttest
scores (DV, ID). Four analyses were performed using repeated-measures ANOVA. This
allowed the results (DV, ID) of each individual scenario to be analyzed as an isolated
group. In addition, a univariate ANOVA was appropriately used to compare the score on
questions from the written exam (DV, ID) for each of the three scenarios to the type of
education received (simulation versus video) (IV, ND). Questions were analyzed
individually and the authors used the Scheffe test post hoc to detect differences between
the groups. Statistical significance was set at p < .05.
All data are presented in Appendix B. The most significant finding of this study
was that there were no statistically significant differences in pretest-posttest scores based
on educational intervention when all scenarios were analyzed together (F = 1.099, p =
0.296). Not surprisingly, significant improvement in pretest-posttest scores was seen for
all three scenarios (F = 252.4, p < .001). The data analysis performed on individual
scenarios indicated the effect of educational intervention did not improve test scores (see
Table B1). Data from the written exams pertaining to scenario content were also
analyzed. Due to exam content, some students answered more than one scenario question.
Results from these analyses reveal no statistically significant differences in test scores
between the two educational interventions for any of the three scenarios (see Table B2)
regardless of how much time elapsed between the education and the written exam.
In their discussion of the findings, Morgan et al. (2002) note they expected education to
improve posttest scores. They were surprised to find no significant difference between
simulation and videotape teaching methods. It is possible that the threats to internal
validity contributed to these results. They acknowledge the lack of a control group
weakened the study design. They originally sought to include a control group but altered
their design due to ethical concerns regarding withholding a potentially superior teaching
method from the students. Unfortunately, absence of a control group makes it impossible
to compare the efficacy of simulation or videotape education to more traditional, and less
costly, methods of instruction. Although the authors do not discuss the generalizability of
their study, the sampling design and threats to external validity limit the ability to
generalize the findings.
In 2001, Marshall et al. published a study evaluating the role of the human
patient simulator (HPS) in the training of surgical interns. The purpose of the study was
to examine the impact of the HPS on self-confidence and to evaluate if the HPS
combined with an Advanced Trauma Life Support (ATLS) class improved trauma
management skills. The authors recruited a sample of 12 surgical interns divided into
three teams. Although it appears they used a convenience sample, no information is
provided regarding the sampling plan or inclusion and exclusion criteria. The authors
state that none of the participants had prior experience with a HPS or ATLS.
Marshall et al. (2001) used a quasi-experimental, two group pretest-posttest
design, which fails to control for several threats to internal validity including history,
maturation, testing, and instrumentation, and all threats to external validity (Campbell &
Stanley, 1963). To familiarize the interns with the equipment, the study protocol required
them to participate in a practice scenario prior to the pretest. Each group of four interns
completed a pretest consisting of two trauma scenarios using the HPS over a 2-day
period. They then completed a 2-day ATLS class. Following the class, the interns
completed the posttest, which was identical to the pretest. The team administering both
tests consisted of the scenario moderator and an evaluator (attending trauma surgeon).
Each team was videotaped during the scenario. Information regarding how face and
content validity were established for the scenarios is not provided. Trauma management
skills were evaluated in three areas: critical treatment decision (CTD) (the ability to
identify and manage life-threatening injuries), potential for adverse outcomes (PAO) (the
ability to recognize and manage comorbid conditions related to the injury), and team
behavior (TB) (the ability to work together effectively). Two faculty members (one who
was present and one who later viewed the video) scored the teams using a five-point scale
(1 = poor; 5 = excellent). Each team was scored based on their ability to perform patient
assessment and management skills in the correct sequence. Self-confidence was
measured prior to the first HPS session and immediately following the last HPS session.
Marshall et al. (2001) took steps to enhance measurement validity and reliability,
although areas of weakness remain. To increase measurement validity and reliability,
tests were videotaped and two faculty members who alternated between real-time and
video evaluation scored each scenario (Hulley et al., 2001). They do not report if interrater
reliability was achieved, which would have increased measurement reliability. They also
used two teams of senior residents as controls to validate the skills scoring on the HPS.
Unfortunately, the use of an identical test threatens the internal validity of the study in the
areas of testing and instrumentation (Hulley et al., 2001).
The authors do a thorough job of explaining the statistical analyses. A Mood's
median test was appropriately used to test for outliers and errors in the data (Nottingham
Trent University, 2001). The Wilcoxon signed-rank test was appropriately used to test the
paired pretest-posttest scores (DV, ordinal data [OD]) for the effect of the HPS and
ATLS class (IV, ND) (Polit & Hungler, 1995). A Kruskal-Wallis test was used
appropriately to test the differences between the three groups of interns (Polit &
Hungler). When significant differences between two groups were found, the Mann-
Whitney U test was used to analyze the difference between the groups (Polit & Hungler).
The level of statistical significance chosen prior to data analyses was not provided.
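The sketch below maps each of these nonparametric tests onto scipy.stats calls (Mood's median test is likewise available as scipy.stats.median_test). The five-point ratings are fabricated for illustration, since the study's raw scores are not published, and with only twelve interns the resulting p values mean little.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Fabricated 5-point skill ratings for 12 interns (1 = poor, 5 = excellent).
pretest = rng.integers(1, 4, 12).astype(float)
posttest = np.clip(pretest + rng.integers(0, 3, 12), 1, 5)

# Paired pre/post comparison (effect of the HPS sessions plus ATLS class):
print(stats.wilcoxon(pretest, posttest))

# Differences among the three intern teams on the posttest:
teams = [posttest[0:4], posttest[4:8], posttest[8:12]]
print(stats.kruskal(*teams))

# Pairwise follow-up used when the Kruskal-Wallis test is significant:
print(stats.mannwhitneyu(teams[0], teams[1]))
```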
All data are presented in Appendix C. This study yielded statistically significant
results in all analyses performed. Performance on all measures increased after the HPS
and ATLS training. The increases in scores after the HPS/ATLS training are as follows: CTD
increased 24% (p = 0.002); PAO increased 25% (p = 0.001); TB increased 47% (p < 0.001).
In addition, the mean score on the self-rating of confidence increased significantly from
5.8 (SD 0.9) to 8.1 (SD 0.5), p = 0.01.
Marshall et al. (2001) conclude that the ATLS/HPS simulation was effective in
increasing the ability to manage trauma patients and self-confidence levels. They believe
increased confidence levels lead to greater self-efficacy. They hope increased self-
efficacy will result in interns trusting their assessment findings and initiating
appropriate treatments. One of the limitations of the simulator is that not all trauma
scenarios can be reproduced, nor is it necessarily true that performance in a simulator will
transfer to the clinical arena; however, they praised the simulator for its effectiveness in
building teamwork.
Marshall et al. (2001) do not discuss study limitations or the generalizability of
the results. One criticism of the study is the omission of information regarding the
sampling plan. Most likely, a convenience sample from a single site was used. This
omission, combined with a study design that fails to control for internal and external
threats to validity, impacts the validity and generalizability of the findings.
In 2001, Rogers, Jacob, Rashwan and Pinsky published a study designed to
quantify learning in fourth year medical students after a critical care medicine (CCM)
elective. This study examined disparities between written evaluation methods and
evaluation using simulation. The purpose of the study was to examine three evaluation
methods (written exam, objective structured clinical exam [OSCE] and HPS) to
determine which method best evaluates student learning. A convenience sample of 24
fourth year medical students enrolled in the CCM elective volunteered to participate in
the study. No information is given regarding how many students declined to participate,
making it impossible to know if the sample is biased. In addition, inclusion and exclusion
criteria are not stated, nor is any additional information about the sample provided.
The authors used a randomized crossover pretest-posttest design, which was an
appropriate design for the stated purpose. One advantage of this design is that confounding
variables are minimized because each subject acted as his or her own control (Hulley et al.,
2001). This substantially increases the statistical power of the design, allowing for smaller
sample sizes. Learning was evaluated by comparing student performance on each of the
three exams: written, OSCE and HPS. All three exams were given at the beginning and
end of the elective. The order of the exams was randomized. Two scenarios were
prepared and students were randomized to one of them for all three pretest exams.
The written exam (14 multiple-choice questions) covered learning objectives. The
OSCE used an actress simulating dyspnea to test the students' ability to evaluate a live
patient. Two nurse educators familiar with both scenarios were randomized to OSCE and
HPS groups and to the pre- and post-rotation exams. The nurses provided data regarding
the patient to the students taking the OSCE. In the HPS exam, students were presented
with scenario I or II and required to evaluate the computer-simulated patient. Students
used their own assessment skills and CJ to determine the patient problem, decide on a
plan of care and to interpret the patient's response to their interventions. After the CCM
elective was completed, students were crossed over to the other scenario for all three
posttest exams. The exams followed the same protocol. Exam questions tested similar
content, which was based on key learning objectives.
Since the purpose of this study was to compare three methods used to evaluate
learning, measurement validity and reliability were very important. A board-certified
CCM faculty member evaluated the questions to ensure their appropriateness, enhancing
content validity. It is unclear if the authors used a physician not involved in the study to
minimize instrument bias (Hulley et al., 2001). OSCE and HPS evaluation criteria
consisted of the same set of behavioral expectations, which, if performed, were scored.
Students had to perform critical behaviors within a set time frame to receive points.
Additional safeguards were used to enhance measurement validity and reliability for the
OSCE and HPS exams. These include standardization of written scoring systems and the
blinding of the evaluator (a single CCM faculty member) as to which test was being
scored (pretest or posttest). HPS exam validity was also enhanced by the ability of the
computer to consistently execute the scenario. The use of standardized scoring tools and
videotaping the exam enhanced measurement reliability. Rogers et al. (2001) state
interrater reliability was achieved; however, the kappa coefficient is not given.
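For reference, the omitted kappa coefficient is simple to compute once two raters' categorical scores are in hand. A minimal sketch, using hypothetical pass/fail ratings rather than the study's unpublished data:

```python
import numpy as np

def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    observed = np.mean(r1 == r2)                        # raw agreement
    expected = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected by chance
                   for c in np.union1d(r1, r2))
    return (observed - expected) / (1.0 - expected)

# Hypothetical pass (1) / fail (0) ratings of 20 videotaped exams by two raters:
rater1 = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0])
rater2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0])
print(round(cohens_kappa(rater1, rater2), 2))  # 0.76 for these made-up ratings
```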
Rogers et al. (2001) provide little data or explanation on the statistical analyses
done, making it difficult to critique them. They state: "Several analysis of
variance techniques were used to analyze our results" (p. 1271). They also indicate that
both between and within group comparisons were done to compare test scores both
between tests, test techniques, and subjects and within subjects over time. Presumably, they
used the following statistical tests to analyze the data (Polit & Hungler, 1995): a one-way
ANOVA to test the mean differences in exam scores (DV, ID) between the three exam
groups (IV, ND), and a repeated-measures ANOVA to test the mean differences in exam
scores (DV, ID) within each of the three groups (IV, ND). Post hoc comparisons using
the Bonferroni test were performed to determine where differences between groups
were located (Glantz, 1997). The level of
statistical significance set by the investigators prior to data analyses is not given.
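The sketch below illustrates the presumed analysis: a one-way ANOVA across the three examination types followed by Bonferroni-adjusted pairwise comparisons. The simulated percentages only loosely echo Table D1, since the raw data are not published.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
# Simulated posttest percentages for 24 students on each examination type.
written = rng.normal(89, 11, 24)
osce = rng.normal(76, 12, 24)
hps = rng.normal(62, 15, 24)

# One-way ANOVA across the three evaluation methods:
print(stats.f_oneway(written, osce, hps))

# Bonferroni-adjusted pairwise comparisons (paired, since each student took all exams):
pairs = [(written, osce), (written, hps), (osce, hps)]
raw_p = [stats.ttest_rel(a, b).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(adj_p, reject)
```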
Data analyses are presented in Appendix D. Not surprisingly, students scored
significantly higher in all posttest exams as compared to their pretest exams regardless of
examination type (see Table D1). The change from pretest to posttest was largest for the
OSCE and the HPS exam and smallest for the written exam. The pretest scores were
similar for the OSCE and the HPS exam; however, both were significantly lower than the
written exam. Statistically significant differences were seen in the analysis of the three
posttest exams. The students scored lowest on the HPS exam, intermediate on the OSCE,
and highest on the written exam. When scores for the written exam, OSCE and HPS exam
are compared between scenarios I and II, students scored significantly higher on the
postrotational written exam for scenario II and on the postrotational HPS exam for scenario I.
Rogers et al. (2001) provide an extensive and persuasive discussion of their
conclusions. They argue the written exam lacks validity because it fails to measure
knowledge application in the clinical setting. They assert written exams overestimate
achievement of learning objectives because the OSCE and HPS exams demonstrated that
the students cannot apply what they have learned to a simulated patient in a realistic
clinical setting. They are also critical of the OSCE because it provides assessment data
rather than requiring the students to make their own assessments and interpret the
findings. Although Rogers et al. state the HPS and the OSCE effectively evaluate
performance, they believe HPS is a superior method when seeking to evaluate higher
cognitive functioning. The authors do not discuss any limitations to their study or to
whom the findings might be generalized. Although use of a small convenience sample
impacts the ability to generalize the findings, the validity of the findings is enhanced by
the strong study design and measurement validity and reliability (Polit & Hungler, 1995).
In 1994, Chopra et al. published a study examining the efficacy of simulation as a
teaching method in anesthesia. The clearly stated purpose of the study was to quantify the
effectiveness of a simulator as an anesthesia training tool. The authors recruited a
convenience sample of 28 anesthesiologists and anesthesia residents from a single
hospital. Inclusion and exclusion criteria are not stated. The authors report group A (N =
13) had an average of 5.06 (SD 3.43) years of anesthesia experience and group B (N
= 15) had an average of 4.61 (SD 4.43) years of anesthesia experience. Although it
appears the groups' experience level was similar, no statistical analysis was performed to
examine this variable.
The authors used a quasi-experimental, two group pretest-posttest design, which
did not control for several threats to internal validity including history, maturation,
testing, and instrumentation and all threats to external validity (Campbell & Stanley,
1963). A baseline assessment of the participants' ability to manage a patient in
anaphylactic shock served as the pretest. The subjects were randomized to group A or B.
Simulator training was provided to each group 2 to 3 weeks after the baseline assessment.
Group A received training on the management of patients in anaphylactic shock (AS);
group B received training in the management of patients with malignant hyperthermia
(MH). Four months after the training, each group was evaluated on their ability to
manage a simulated patient using a scripted scenario for MH only. Interestingly, Chopra
et al. (1994) do not explain their rationale for training group A on AS and then testing
them on MH. If they had decided not to provide group A with training, it could have been
used as a control group, creating an experimental study. Details regarding the training
session content were not provided, precluding the reader from evaluating the
intervention. The pretest, posttest and training sessions were all videotaped. Subjects
were scored based on empirical scoring systems developed by the authors. Scoring
criteria were based on the amount of time from the start of the scenario to the first
intervention (response time), how well the interventions adhered to published guidelines
(treatment score), and whether the interventions deviated from the guidelines (deviation
score). These scores were added to obtain a total performance score.
Chopra et al. (1994) took care to enhance measurement validity, but areas of
weakness remain. Although it is not reported how the scoring tool was validated, it was
based on established guidelines, increasing its content validity. An investigator not
involved in the simulator sessions viewed the videotapes and used a written scoring tool
to score the pretest and posttest, reducing the potential for bias (Hulley et al., 2001);
however, the scoring was not blinded. Measurement reliability could have been strengthened if the
authors established the interrater reliability of the scoring tool.
Chopra et al. (1994) report an analysis of covariance (ANCOVA) was used to
compare response times, treatment scores, deviation scores, and total performance scores
(DV, ID) of group A and B subjects during the posttest, adjusted for their respective
scores during the pretest. An ANCOVA was used to eliminate "the effect of any
interindividual variability on the final analysis" (p. 295). Polit and Hungler (1995) state
the ANCOVA controls for extraneous variables, even in randomized groups, so that the
final analysis more precisely reflects the experimental intervention effect, enhancing the
significance of the results. A p value of <0.05 was considered significant.
All data are presented in Appendix E. Statistically significant findings between
groups A and B were seen in all four posttest scores (see Table E1). Chopra et al. (1994)
report the average differences in posttest score for group B, who received the MH
training, as compared to group A are as follows: response times 76.5 (SD 27.3) p = 0.01;
treatment score 10.7 (SD 4.9) p = 0.04; deviation scores 11.5 (SD 4.1), p = 0.01; and total
PN-562-02S 24
performance posttest scores 22.3 (SD 8.3), p = 0.01), These scores were significantly
better than the average scores of the group A subjects.
Chopra et al. (1994) assert their study shows the subjects trained using the
simulator were able to respond more quickly, intervene more appropriately and with less
deviation in emergent situations such as MH than subjects who did not receive this
training. While it is true that the randomization, use of the pretest and ANCOVA analysis
worked to minimize the effect of confounding variables, lack of a control group impacts
the ability to attribute the effect to the training. The authors provide a thorough
discussion of the study's limitations. They acknowledge the possibility of observer bias or
that some of the subjects had knowledge regarding the posttest scenario prior to being
tested, which may have altered their performance. Perhaps more importantly, Chopra et
al. are candid in their admission that performance using a simulator may not predict
performance in the clinical setting. Certainly subjects participating in a simulation are
more likely to anticipate an adverse event compared to the anesthesiologist who is
participating in what is thought to be a routine case. As with the previous studies, the
small convenience sample from a single setting and the threats to external validity impact
the ability to generalize the findings.
Discussion and Application
This critical literature review has presented a discussion of the significance of
simulation for nursing practice, an examination of the theoretical base for its use, and a
critique of five quantitative articles from the nursing and medical literature examining its
effectiveness. This section of the review will discuss significant research findings, gaps in
the literature and the implications for future research as it pertains to the efficacy of
simulation to teach CT and CJ.
Significant Accomplishments
In the critique of the theoretical base, a small sample of the published work
addressing the links between CT, CJ and excellence in nursing practice was presented.
Nursing scholars have unequivocally demonstrated the link between CT and CJ (Alfaro-
LeFevre, 1999; Facione & Facione, 1996a). Their work demonstrates that CT and CJ are
high-level cognitive processes that involve much more than the memorization of facts.
The ability of the nurse to synthesize and apply knowledge is an active process, best
taught using active, experiential and reflective teaching methods (del Bueno, 1983;
Schank, 1990). The emergence of ALT and CLT has contributed to alterations in
curricula that support this type of active experiential learning, as evidenced by the
adoption of new teaching modalities including simulation, case studies, interactive
computer-assisted instruction, journal writing and classes using seminar formats.
The advantage simulation has over other modalities is its ability to realistically
recreate the clinical setting (Morton, 1997). Although simulation will never replace the
need for clinical education, it will allow nurses to gain experience in caring for high-risk
patients without jeopardizing the patient's safety. If we extrapolate from the theory
advanced by Benner, Hooper-Kyriakidis and Stannard (1999), simulation used
appropriately by an experienced clinical educator has the potential to assist a nurse in
creating the beginnings of an experiential foundation for clinical practice. In this sense, it
serves as a bridge between theoretical knowledge and clinical practice. In addition,
simulation can develop CJ skills in nurses across the experiential continuum (Rauen,
2001). This is especially important because nurses who do not have a disposition towards
CT and do not engage actively in learning and reflection throughout their clinical practice will
not succeed in developing CJ (Facione & Facione, 1996a). Simulation can assist in
remediating CT deficiencies for these nurses.
The researchers who published the five quantitative studies examined in this
review have advanced the body of knowledge on the topic. Rogers et al. (2001)
performed a valuable service to the field of education by
demonstrating quantitatively that the written exam is not a valid measure of clinical
performance. This study validates the observations of Wong (1979) and Tanner (1987)
and reinforces the need to evaluate student learning in an experiential setting. Marshall et
al. (2001) demonstrated the utility of simulation when they discovered it reveals students'
knowledge deficits. This allows educators to modify their teaching plans to meet the
needs of both the individual and the group. It can also be used to confirm that key concepts
can be demonstrated in the clinical setting, showing the application of knowledge.
The study by Marshall et al. (2001) demonstrated that participation in simulated
scenarios increased interns' confidence levels. This is an important finding because as the
authors point out, novice practitioners tend not to trust their assessment findings. When
abnormal findings are dismissed, needed treatments are often not initiated. In the ICU
setting this can result in disaster for the patients. In addition, ICU nurses receive orders
requiring them to titrate interventions based on their assessment skills. If simulation can
increase the confidence nurses have in these crucial skills, the patients will benefit.
The study by Chopra et al. (1994) showed that anesthesia practitioners who
were trained with simulators demonstrated faster response times and adhered more
closely to established guidelines than practitioners who did not receive the training. This
study also holds promise for nursing education. Simulation can be used to teach not only
theoretical concepts and skills, but also to teach procedures and protocols. For example,
pulmonary artery (PA) line insertion and maintenance are core competencies for critical
care nurses. The simulation lab could be used to teach the procedure and the care of the
patient with a PA line in a much more meaningful and realistic way than traditional
methods that involve reading the procedure and seeing it demonstrated.
Gaps in the Literature/Implications for Future Research
The greatest gap in the literature is the absence of quantitative studies addressing
the efficacy and utility of simulation in nursing education. A review of the literature
revealed the existence of four nursing simulation labs (two in academic settings; two
operated by the military). As described earlier, two professors who have published
articles advocating for the use of simulation in nursing education were contacted. Neither
professor has conducted research relating to their labs. In addition, both professors
acknowledged they were not aware of any published quantitative nursing research. The
absence of nursing research regarding simulation is problematic. Although there are great
similarities between medical and nursing education, that does not mean that the results
can necessarily be generalized to nursing. All of the studies critiqued in this review
lacked a control group; therefore, it is difficult to say if improvements in performance
were caused by the use of simulation. In fact, Morgan et al. (2002) found that simulation
was not superior to videotape as a teaching method. In addition, all of the medical studies
used relatively small convenience samples from single sites, which, as stated, impacts the
ability to generalize the findings to other
populations and settings (Polit & Hungler, 1995). Furthermore, at least three of these
studies used volunteer samples, which increases the likelihood of sampling bias.
Simulation will be advanced in nursing when well-designed experimental studies
are conducted. As stated, a weakness of the studies reviewed is that none of them used
a control group. As such, they were all subject to threats to internal and external validity
(Polit & Hungler, 1995). Although ethical concerns regarding withholding a superior
teaching method from students are understandable, the current state of the evidence is
that the superiority of simulation has yet to be established. Until studies using a control
group are published, the efficacy of simulation in CT and CJ development will continue
to be debated.
Although the study by Chau et al. (2001) did not demonstrate the efficacy of
simulation, it addressed another very pertinent issue in nursing research. In the authors'
discussion of how to best measure CT and CJ, they question the use of acontextual
standardized tests. One of the most problematic areas for nurse researchers studying the
efficacy of simulation is how to measure applied CT and CJ. Although these tests are
valuable tools, their appropriateness in measuring applied CT and CJ in the context of
simulation can be debated. Are they a direct measure of a nurse's ability to synthesize,
integrate and apply theoretical concepts and clinical findings into appropriate action in a
constructed patient scenario? It is interesting to note in the medical studies reviewed that
subjects were scored based on their performance of behaviors consistent with the
appropriate management of specific patient types. When Chau et al. designed their study
they created the NKT in an attempt to measure the students' knowledge of the specific
patient types covered in their vignettes. They were able to quantify a difference in
knowledge in the first year students using this tool, but no increase in CT skills was
quantified using the CCTST. This leads one to question whether the CCTST is an
appropriate test for this application or whether behavioral tools are a better measure of
applied CT and CJ. To determine if these tests are valid measures of the effects of
simulation, a study could be designed that compared measurements of both CT and CJ
using both an acontextual test and a validated tool created specifically for the simulated
scenarios. Establishing the validity and reliability of the tools used to measure applied CT
and CJ will be a major advance for nursing research.
An area requiring additional inquiry is the validity of simulation to predict clinical
performance. Chopra et al. (1994) raised this concern in the discussion of their findings.
Certainly, simulation is an artificial environment. When the research setting does not
represent the practice environment, the study is subject to threats to external validity
including the Hawthorne, novelty, experimenter, and measurement effects and the interaction of
history and treatment (Polit & Hungler, 1995). These threats to external validity
can be mitigated by replication of the findings in other settings and environments. This is
yet another argument for nurse researchers to contribute to the body of knowledge.
Conclusion
Whether simulation becomes one of the many strategies available to nurse
educators to develop CT skills and CJ in nurses or a trend that falls by the wayside will
be left to nurse researchers. As we have seen in this review, there is a strong theoretical
base for its use. In addition, medical educators have published studies showing its utility
and efficacy in teaching students assessment and management skills. Anecdotal accounts
of the efficacy of simulation will not contribute to its advancement in nursing. Until nurse
researchers can demonstrate quantitatively the efficacy and utility of simulation, it will be
difficult to convince institutions to make the enormous financial investment. Proposals
for funds to finance a simulation lab will be met with the retort: "Show me the data!"
Medical researchers have worked to advance simulation in medical education, but it will
fall to nurse researchers to establish the efficacy of simulation if it is to become
widespread in nursing education.
Nurse researchers must consider many things when designing studies. They must
strive to design experimental studies using control groups; they must develop well-
validated measures; and finally, they must replicate research to minimize the threats to
external validity. Certainly, researchers who already have simulation labs run a risk if
they conduct quantitative research. If the research does not validate the efficacy of
simulation, they may be unable to justify the expense of the labs. Pioneering innovation is
not easy or without risk, but if we believe the following Chinese proverb, how can we not
answer the call: "I hear, I forget; I see, I remember; I do, I understand" (Rauen, 2002).
Appendix A
Statistical Data from Chau et al. (2001)
Table A1. Effect of Educational Intervention on Knowledge for Students in
Years 1 and 2
Source of Variation                         Sum of Squares   df    Mean Square       F    p value
Educational intervention (pre-post test)            992.33    1         992.33   23.99     <0.001
Year                                                  62.76    1          62.76    1.52      0.222
Interactions                                         189.82    1         189.82    4.59      0.034
Residual                                            6536.00  158          41.37
Total                                               7714.24  161          47.92
Table A2. Mean Knowledge Test Results for Students in Years 1 and 2
Year of Study   Pretest       Posttest      Pretest-Posttest   Scheffe's Least          p value
                Mean (SD)     Mean (SD)     Difference         Significant Difference
Year 1          24.37 (6.39)  31.51 (4.31)  7.14               5.93                     0.01
Year 2          25.30 (7.70)  28.09 (6.56)  2.80               4.48                     >0.05
Table A3. Effect of Educational Intervention on Critical Thinking Skills for
Students in Years 1 and 2
Source of Variation                         Sum of Squares   df    Mean Square      F    p value
Educational intervention (pre-post test)              0.17    1           0.17   0.01       0.93
Year                                                   5.49    1           5.49   0.28       0.60
Interactions                                           5.95    1           5.95   0.31       0.58
Residual                                            3146.95  162          19.43
Total                                               3158.77  165          19.14
Appendix B
Statistical Data from Morgan et al. (2002).
Table B1. Pretest and Posttest Results for Simulator and Video Teaching in Three
Scenarios (Mean +/- SD)

                                           Simulator         Video             F       p*
Scenario 1: Myocardial Ischemia (N = 43)
  Pretest score (0-12)                     6.48 +/- 2.20     6.05 +/- 2.46
  Posttest score (0-12)                    10.95 +/- 1.75    11.14 +/- 1.17    0.525   0.47
Scenario 2: Anaphylaxis (N = 48)
  Pretest score (0-12)                     5.92 +/- 2.28     6.55 +/- 2.46
  Posttest score (0-12)                    11.08 +/- 1.26    10.41 +/- 1.44    2.982   0.09
Scenario 3: Hypoxemia (N = 53)
  Pretest score (0-12)                     7.78 +/- 1.73     8.17 +/- 2.31
  Posttest score (0-12)                    8.78 +/- 1.83     9.10 +/- 1.67     0.010   0.92

*Denotes the significance of the difference in pretest-to-posttest scores according to the
educational intervention used (simulation vs. videotape).
Table B2. Mean and SD of Written Exam Marks on Focused Questions

                   Scenario 1:               Scenario 2:              Scenario 3:
                   Myocardial Ischemia       Anaphylaxis (0-10)       Hypoxemia (0-10)
                   (0-10)
Simulation Group   7.34 +/- 1.70 (N = 26)    8.10 +/- 1.35 (N = 29)   8.28 +/- 2.31 (N = 37)
Videotape Group    7.61 +/- 1.30 (N = 22)    7.98 +/- 1.41 (N = 27)   8.40 +/- 1.95 (N = 46)

Results reported include all written examination marks (combined results: 2, 16 or 30
days after the simulation or videotape session).
Appendix C
Statistical Data from Marshall et al. (2001).
Table C1. Trauma Management Skill Scores*

Skill Area                             First HPS Session    Second HPS Session    p Value**
                                       (Pre-ATLS Course)    (Post-ATLS Course)
Critical Treatment Decision (CTD)      1.7 (0.4)            2.1 (0.6)             0.002
Potential for Adverse Outcome (PAO)    1.6 (0.6)            2.0 (1.0)             0.001
Team Behavior (TB)                     1.7 (0.3)            2.5 (0.4)             <0.001

*Trauma Management Skill Scale: 1 = poor performance; 5 = excellent performance. Data are presented as mean (SD).
**Wilcoxon signed rank test.
HPS: human patient simulator; ATLS: Advanced Trauma Life Support.
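Marshall et al. analyzed these paired pre- and post-course ratings with the Wilcoxon signed rank test. The sketch below shows the mechanics of that test; the arrays are hypothetical stand-ins, since the raw per-resident scores were not published:

```python
# Wilcoxon signed rank test on paired pre-/post-ATLS skill ratings.
# The arrays are invented for illustration only.
from scipy.stats import wilcoxon

pre_course  = [1.5, 1.8, 1.6, 2.0, 1.4, 1.9, 1.7, 1.2]
post_course = [1.8, 2.2, 2.1, 2.6, 2.1, 2.7, 2.6, 2.2]
statistic, p = wilcoxon(pre_course, post_course)
print(f"W = {statistic}, p = {p:.3f}")  # every rating improved, so p is small
```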
Appendix D
Statistical Data from Rogers et al. (2001).
Table D1. Comparison of Evaluation Results

                       Prerotation (%)   Postrotation (%)
Written examination    77 (16)*          89 (11)**
OSCE                   47 (15)           76 (12)
HPS                    41 (14)           62 (15)

*p < .001: written > OSCE and HPS
**p < .001: written > OSCE > HPS
OSCE: Objective Structured Clinical Examination; HPS: Human Patient Simulator

[Figures 1 (Prerotation) and 2 (Postrotation), not reproduced here: examination results (percentage scores) by case scenario for the written examination, OSCE, and HPS, pre- to postrotation. Scenario 1: sepsis with hypotension. Scenario 2: myocardial infarction with hypotension. *p < .02; **p < .001.]
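The asterisked contrasts in Table D1 are paired comparisons of evaluation methods within the same students. As a purely illustrative sketch (the scores below are hypothetical; Rogers et al. published only group summaries), a paired t test contrasting written-examination and HPS scores would look like this:

```python
# Paired t test: written examination vs. HPS scores for the same students.
# Hypothetical percentage scores, loosely echoing the Table D1 means.
from scipy.stats import ttest_rel

written = [75, 82, 68, 90, 77, 71, 85, 80]
hps     = [40, 52, 35, 60, 44, 38, 55, 47]
t_stat, p = ttest_rel(written, hps)
print(f"t = {t_stat:.2f}, p = {p:.2g}")  # written > HPS, as in Table D1
```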
Appendix E
Statistical Data from Chopra et al. (1994).
Table E1. Effect of Training on Group B Subjects as Measured in the Posttest, Mean (Standard Deviation)

                           Group A                     Group B                      Difference during Posttest
Variable                   Pretest       Posttest      Pretest       Posttest       Estimate      95% CI     p Value
Response time (s)          66.7 (31.3)   216 (79.2)    53.4 (19.3)   157.5 (74.3)   76.5 (27.3)   23-130     0.01
Treatment score            72.3 (11.5)   69.6 (13.7)   76.7 (10.3)   81.3 (11.9)    10.7 (4.9)    1.2-20.3   0.04
Deviation score            18.8 (10.4)   22.7 (12.5)   18.0 (8.6)    11.0 (9.3)     11.5 (4.1)    3.5-19.5   0.01
Total performance score    53.5 (18.1)   46.9 (23.4)   58.7 (16.3)   70.3 (19.6)    22.3 (8.3)    6.0-38.6   0.01

Statistical significance: p < 0.05.
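The 95% confidence intervals in Table E1 are consistent with estimate +/- 1.96 x the parenthesized value beside each estimate, which suggests those values are standard errors of the differences. The sketch below reproduces the published intervals under that reading (our inference, not a statement by Chopra et al.):

```python
# Reproduce the 95% CIs in Table E1 as estimate +/- 1.96 * SE, treating
# the value in parentheses beside each estimate as a standard error
# (our reading of the table, not stated explicitly by Chopra et al.).
rows = [("Response time (s)",       76.5, 27.3),
        ("Treatment score",         10.7,  4.9),
        ("Deviation score",         11.5,  4.1),
        ("Total performance score", 22.3,  8.3)]
for variable, estimate, se in rows:
    low, high = estimate - 1.96 * se, estimate + 1.96 * se
    print(f"{variable}: 95% CI [{low:.1f}, {high:.1f}]")
# Matches the published intervals (23-130, 3.5-19.5, and 6.0-38.6 exactly;
# 1.1-20.3 vs. the published 1.2-20.3 is a rounding difference).
```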
References
Adams, B. L. (1999). Nursing education for critical thinking: An integrative review. Journal of Nursing Education, 38(3), 111-119.
Alfaro-LeFevre, R. (1999). Critical thinking in nursing: A practical approach (2nd ed.). Philadelphia: W. B. Saunders Company.
Alspach, J. G. (1995). The educational process in nursing staff development. St. Louis: Mosby.
American Association of Colleges of Nursing (2002). Nursing shortage fact sheet. Available online: www.aacn.nche.edu/Media/Backgrounders/shortagefacts.htm.
American Association of Colleges of Nursing (1998). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.
Bastable, S. B. (1997). Nurse as educator: Principles of teaching and learning. Sudbury: Jones and Bartlett Publishers.
Benner, P. (1984). From novice to expert: Excellence and power in clinical nursing practice. Menlo Park: Addison Wesley.
Benner, P., Hooper-Kyriakidis, P., & Stannard, D. (1999). Clinical wisdom and interventions in critical care: A thinking in action approach. Philadelphia: W. B. Saunders.
Campbell, D. T. & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.
Case, B. (1995). Critical thinking: Challenging assumptions and imagining alternatives. Dimensions of Critical Care Nursing, 14(5), 274-279.
Chau, J. P., Chang, A. M., Lee, I. F., Ip, W. Y., Lee, D. T., & Wootton, Y. (2001). Effects of using videotaped vignettes on enhancing critical thinking abilities in a baccalaureate nursing programme. Journal of Advanced Nursing, 36(1), 112-119.
Chopra, V., Gesink, B. J., de Jong, J., Bovill, J. G., Spierdijk, J., & Brand, R. (1993). Does training on an anesthesia simulator lead to an improvement in performance? Anesthesiology, 79(3A), A1117.
del Bueno, D. J. (1983). Doing the right thing: Nurses' ability to make clinical decisions. Nurse Educator, 8(3), 7-11.
Dobbin, K. R. (2001). Applying learning theories to develop teaching strategies for the critical care nurse: Don't limit yourself to the formal classroom lecture. Critical Care Nursing Clinics of North America, 13(1), 1-11.
Dobrzykowski, T. M. (1994). Teaching strategies to promote critical thinking in nursing staff. The Journal of Continuing Education in Nursing, 25(6), 272-276.
Eaves, R. H. & Flagg, A. J. (2001). The U. S. Air Force pilot simulated medical unit: A teaching strategy with multiple applications. Journal of Nursing Education, 40(3), 110-115.
Elliot, D. D. (1996). Promoting critical thinking in the classroom. Nurse Educator, 21(2), 49-51.
Facione, N. C. & Facione, P. A. (1996a). Assessment design issues for evaluating critical thinking in nursing. Holistic Nursing Practice, 10(3), 41-53.
Facione, N. C. & Facione, P. A. (1996b). Externalizing the critical thinking in knowledge development and clinical judgment. Nursing Outlook, 44(3), 129-136.
Facione, N. C., Facione, P. A., & Sanchez, C. A. (1994). Critical thinking disposition as a measure of competent clinical judgment: The development of the California Critical Thinking Disposition Inventory. Journal of Nursing Education, 33, 345-350.
Gaba, D. M. & DeAnda, A. (1988). A comprehensive anesthesia simulation environment: Re-creating the operating room for research and training. Anesthesiology, 69, 387-394.
Glantz, S. A. (1997). Primer of biostatistics (4th ed.). New York: McGraw-Hill.
Good, M. L., Gravenstein, J. S., Mahla, M. E., White, S. E., Banner, M. J., Carovano, R. G., & Lampotang, S. (1992). Can simulation accelerate the learning of basic anesthesia skills by beginning anesthesia residents? Anesthesiology, 77(3A), A1133.
Gordon, J. A. (2000). The human patient simulator: Acceptance and efficacy as a teaching tool for students. Academic Medicine, 75(5), 522.
Gravenstein, J. S. (1988). Training devices and simulators. Anesthesiology, 69(3), 295-297.
Hickman, J. S. (1993). A critical assessment of critical thinking in nursing education. Holistic Nursing Practice, 7, 36-47.
Hulley, S. B., Cummings, S. R., Browner, W. S., Grady, D., Hearst, N., & Newman, T. B. (2001). Designing clinical research (2nd ed.). Philadelphia: Lippincott, Williams & Wilkins.
Jha, A. K., Duncan, B. W., & Bates, D. W. (2001). Simulator-based training and patient safety. In Making health care safer: A critical analysis of patient safety practices (AHRQ Publication No. 01-E058). Rockville, MD: Agency for Healthcare Research and Quality.
Kintgen-Andrews, J. (1991). Critical thinking and nursing education: Perplexities and insights. Journal of Nursing Education, 30(4), 152-157.
Marshall, R. L., Smith, J. S., Gorman, P. J., Krummel, T. M., Haluck, R. S., & Cooney, R. N. (2001). Use of a human patient simulator in the development of resident trauma management skills. The Journal of Trauma, 51(1), 17-21.
Maynard, C. A. (1996). Relationship of critical thinking ability to professional nursing competence. Journal of Nursing Education, 35(1), 12-18.
Morgan, P. J., Cleave-Hogg, D., McIlroy, J., & Devitt, J. H. (2002). Simulation technology: A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology, 96(1), 10-16.
Morton, P. G. (1996). Using a critical care simulation laboratory to teach students. Critical Care Nurse, 17(6), 66-69.
Morton, P. G. (1997). Creating a laboratory that simulates the critical care environment. Critical Care Nurse, 16(6), 76-81.
National League for Nursing Accrediting Commission (2001). Accreditation manual. Available online: www.nlnac.org.
Nottingham Trent University (2001). Non-parametric tests in Minitab. Available online: http://science.ntu.ac.uk/msor/mjb/minitab/nonparatst.html.
Oermann, M. H. (1998). How to assess critical thinking in clinical practice. Dimensions of Critical Care Nursing, 17(6), 322-327.
Polit, D. F. & Hungler, B. P. (1995). Nursing research: Principles and methods (6th ed.). Philadelphia: Lippincott, Williams & Wilkins.
Rauen, C. A. (2001). Using simulation to teach critical thinking skills: You can't just throw the book at them. Critical Care Nursing Clinics of North America, 13(1).
Rauen, C. A. (2002). Simulation as a teaching strategy in critical care. Manuscript submitted for publication.
Rogers, P. L., Jacob, H., Rashwan, A. S., & Pinsky, M. R. (2001). Quantifying learning in medical students during a critical care medicine elective: A comparison of three evaluation methods. Critical Care Medicine, 29(6), 1268-1273.
Schank, M. J. (1990). Wanted: Nurses with critical thinking skills. The Journal of Continuing Education in Nursing, 21(2), 86-89.
Tanner, C. A. (1987). Teaching clinical judgment. Annual Review of Nursing Research, 5, 153-173.
Vandrey, C. I. & Whitman, K. M. (2001). Simulator training for novice critical care nurses: Preparing providers to work with critically ill patients. American Journal of Nursing, 101(9), 24GG-24LL.
Weis, P. A. & Guyton-Simmons, J. (1998). A computer simulation for teaching critical thinking skills. Nurse Educator, 23(2), 30-33.
Whiteside, C. (1997). A model for teaching critical thinking in the clinical setting. Dimensions of Critical Care Nursing, 16(3), 152-165.
Wisser, S. H. (1974). Those darned principles. Nursing Forum, 13, 386-392.
Wong, J. (1979). The inability to transfer classroom learning to clinical nursing practice: A learning problem and its remedial plan. Journal of Advanced Nursing, 4, 161-168.