
The Effects of Interpolated Lectures, Self-Testing, and Notetaking on Learning from a

Science Video Lecture

by

Eevin Jennings, B.S., M.A.

A Dissertation

In

Experimental Psychology: Cognition & Cognitive Neuroscience

Submitted to the Graduate Faculty

of Texas Tech University in

Partial Fulfillment of

the Requirements for

the Degree of

DOCTOR OF PHILOSOPHY

Approved

Roman Taraban, Ph. D.

Chair of Committee

Philip Marshall, Ph. D.

Michael Serra, Ph. D.

Tyler Davis, Ph. D.

Mark Sheridan, Ph.D.

Dean of the Graduate School

August, 2018

© 2018, Eevin Jennings

Texas Tech University, Eevin Jennings, August 2018


TABLE OF CONTENTS

ABSTRACT ...................................................................................................................... vi

LIST OF TABLES ......................................................................................................... viii

LIST OF FIGURES ......................................................................................................... ix

I. INTRODUCTION ......................................................................................................... 1

Summary. ........................................................................................................................ 5

Background for the Current Study .................................................................................. 6

Interpolated Lectures ................................................................................................... 7

Self-Testing ................................................................................................................. 9

Note Revisions for Others ......................................................................................... 11

Overview ................................................................................................................... 14

Dependent Variables ..................................................................................................... 15

Note Quantity and Temporal Distribution ................................................................. 15

Free Recall Quantity and Temporal Distribution ...................................................... 16

Cued Recall ............................................................................................................... 16

Conceptual Integration .............................................................................................. 17

Metacognitive Judgments .......................................................................................... 18

Hypotheses Related to Interpolated Lectures ............................................................ 18

Hypotheses Related to Interpolated Lecture Notes ................................................... 19

Hypotheses Related to Free Recall and Temporal Distribution ................................ 20

Hypotheses Related to Cued Recall and Integration ................................................. 20

Hypotheses Related to Metacognition ....................................................................... 21

II. METHOD ................................................................................................................... 22

Participants .................................................................................................................... 22


Design............................................................................................................................ 22

Materials ........................................................................................................................ 23

Procedure ....................................................................................................................... 25

III. STATISTICAL ANALYSES ................................................................................... 28

Demographic Analyses ................................................................................................. 28

Checking Statistical Assumptions ................................................................................. 28

Statistical Methods ........................................................................................................ 29

Data Coding................................................................................................................... 29

IV. RESULTS .................................................................................................................. 31

Demographics................................................................................................................ 31

Lecture Notes ................................................................................................................ 31

Note quantity. ........................................................................................................ 31

Temporal distribution. .......................................................................................... 31

Number and type of note revisions........................................................................ 33

Criterion Tests ............................................................................................................... 35

Free recall quantity and temporal distribution. ................................................... 35

Cued recall. ........................................................................................................... 36

Integration............................................................................................................. 38

Metacognition................................................................................................................ 39

V. DISCUSSION ............................................................................................................. 42

Interpolation .................................................................................................................. 42


Self-Testing ................................................................................................................... 46

Note Revision for Others............................................................................................... 49

Metacognition................................................................................................................ 52

Summary of Hypotheses and Outcomes ....................................................................... 55

Limitations .................................................................................................................... 56

Future Directions ........................................................................................................... 57

VI. CONCLUSION ......................................................................................................... 60

REFERENCES ................................................................................................................ 62

APPENDICES

A. EXTENDED LITERATURE REVIEW .................................................................. 85

Learning from Lectures ................................................................................................. 86

Interactive-Constructive-Active-Passive (ICAP) Taxonomy .................................... 87

Learning from Video Lectures ...................................................................................... 95

Proactive Interference ................................................................................................ 97

Mind-wandering ........................................................................................................ 98

Notetaking ................................................................................................................. 98

Peer Involvement ..................................................................................................... 106

Spaced Lectures ....................................................................................................... 109

Self-testing ............................................................................................................... 111

Interpolated Testing ................................................................................................. 114

Research Questions ..................................................................................................... 117


The Engagement Mode of Interpolated Testing ...................................................... 117

Notetaking Assessment ........................................................................................... 121

Note Revision .......................................................................................................... 123

Note Revision for Others ......................................................................................... 124

Temporal Distribution ............................................................................................. 125

Summary .................................................................................................................. 126

B. LECTURE TRANSCRIPT AND CODING SCHEME ........................................ 128

C. CUED RECALL TOPIC SELECTION PROCESS ............................................. 151

D. EXPERIMENTAL INSTRUCTIONS.................................................................... 155

E. LECTURE NOTES CODING RUBRIC ................................................................ 163

F. DEMOGRAPHIC ANALYSES .............................................................................. 169

G. FREE RECALL CODING SCHEMA ................................................................... 172

H. CUED RECALL AND INTEGRATION CODING SCHEMA ........................... 178


ABSTRACT

As more college lectures are delivered online, students and instructors alike must

adapt to the cognitive, metacognitive, and behavioral changes that take place. To address

these issues, research on interpolated testing has shown that memory benefits more when lectures are interpolated with tests than with restudy sessions. However, no research has

directly compared interpolated to un-interpolated (continuous) video lectures. Therefore,

the first aim of this study was to test whether interpolated lectures are more effective for the encoding, retention, and integration of lecture information than continuous lectures. In addition, studies on interpolated testing have not examined notetaking factors, the breadth of lecture information in memory, or retention at a delay. Therefore, a second aim of the current study was to replicate the basic effects of interpolated testing reported in the literature and to examine other variables that may

extend and better explain these outcomes, as follows. Only recently have studies emerged

examining the effects of peer-dependent notetaking, as well as notetaking revision, both

of which suggest that there are additional methods to improve learning from video

lectures. Participants took notes during a 30-minute video lecture. After a 24-hour delay,

they completed tests that assessed different aspects of learning and memory for the

lecture. As predicted, the interpolated lecture type was more effective for note quantity and note revisions, and for combating notetaking fatigue throughout the lecture. Self-testing did not differ from the note revision or restudy conditions on free or cued recall;

however, the note revision groups made the most cross-lecture references. This

dissertation demonstrates that interpolated lectures improve lecture notes and note


revisions, and that note revision for others improves conceptual integration. The results

inform online education, suggesting that interpolated lectures may more effectively keep

students’ attention, and the activities assigned during these pauses may facilitate different

types of learning.


LIST OF TABLES

1 The Role of ICAP in the Current Experiment...........................................................5

2 Descriptive Variables for Each of the Lecture Segments........................................24

4.1 Note Quantity as a Function of Lecture Type and Activity.....................................31

4.2 Types of Note Revisions.........................................................................................35

4.3 Cued Recall Performance.......................................................................................38

4.4 Cued Recall Same-Segment Elaborations..............................................................38

4.5 Conceptual Integration Performance......................................................................39

4.6 Absolute Accuracy.................................................................................................40

4.7 JOL Ratings............................................................................................................41


LIST OF FIGURES

1 ICAP Framework (Chi & Wylie, 2014)....................................................................3

2 Overview of the Procedure for Experiment 1.........................................................26

4.1 Temporal Distribution of Notes as a Function of Condition...................................33

4.2 Temporal Distribution of Free Recall as a Function of Condition...........................35


CHAPTER I

INTRODUCTION

Although instructors vary significantly in their instructional design choices, a

common goal for all of them is to help students learn the course material. To improve pedagogical outcomes, research focuses on both instructor and student cognition and behavior. In this vein, internet-based lecture delivery has emerged, bringing forth platforms such as the flipped classroom, hybrid courses, blended learning, and complementary resources for students to use outside of regular lecture.

Live-streaming video lectures (also referred to as "webinars") are increasingly used to deliver information at a traditional lecture-style pace. Students are expected to

attend the lectures at designated times without the opportunity to pause or rewind the

lecture, echoing the immediacy of face-to-face lectures while simultaneously reaching

students from various locations. This type of lecture is utilized not only in formal

university courses, but also freely as Massive Open Online Courses (MOOCs) (Breslow

et al., 2013). Still, unanswered questions remain regarding how instructors can improve

learning from online lectures.

The present study questions whether typical 50-minute, uninterrupted video

lectures provide the greatest learning benefits for students. A 50-minute period may simply be too long for students to learn effectively (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Di Vesta & Smith, 1979). The rapid, unbroken succession of

novel, incoming ideas in a lecture may not afford the average student enough time to


engage in generative, meaningful learning; that is, to select, encode, organize, and

connect the material to their prior knowledge (Aiken, Thomas, & Shennum, 1975;

Fiorella & Mayer, 2015b; Foulin, 1995). Cognitive processing limitations during lecture

delivery can then result in memory decay, especially for individuals with low working-memory capacity or learning disabilities (Bui & Myerson, 2014; Ruhl, Hughes, & Gajar, 1990). Therefore, a major

question in this study is whether methods from distributed learning in the form of

“breaking up” long lectures can be combined with both traditional and novel learning

activities to aid students’ cognitive processing and retention of video presentations.

Interaction with lecture material often results in enhanced memory and

comprehension for that material (Craik & Tulving, 1975; Mayer, 2002, 2008; Prince,

2004), since it is more likely to evoke deeper processing (Craik & Lockhart, 1972;

Gardiner, Craik, & Bleasdale, 1973; Lockhart & Craik, 1990; Tyler, Hertel, McCallum,

& Ellis, 1979). The interactive-constructive-active-passive (ICAP) framework (Chi,

2009) provides a fine-grained, cognitive-based explanation for different aspects of the

learning process. Specifically, ICAP proposes that various learning activities can promote

different types of interaction with material. That is, there are four separate modes of

engagement, from the deepest-learning to least: interactive, constructive, active, and

passive. Each level is subsumed by the next (i.e., passive learning is inherently required

for active learning, and so forth) and, based on overt behaviors, is expected to invoke

some type of knowledge change. These knowledge changes are measured by performance

on corresponding cognitive outcomes (see Figure 1).


Figure 1. ICAP Framework (Chi & Wylie, 2014)

First, passive modes of engagement are defined by encoding information in an

isolated way, such that facts are not related to one another nor to prior knowledge (Chi &

Wylie, 2014). Students who use passive modes of learning may be able to recall

individual facts but do not form coherent representations nor constructs with them

(Menekse, Stump, Krause, & Chi, 2013). A key example of a learning activity that

promotes passive engagement is attentively listening to a lecture without further overt

learning activities (i.e., not attending to main points more than details, etc.). Students in

this case do not integrate the information or manipulate it in any way, which characterizes

the activity (generic listening) as a passive modality.

In ICAP (Chi & Wylie, 2014) active engagement is characterized by the

manipulation of information, which causes knowledge-change processes such as

integration. Integration can be achieved through activities such as underlining,


rehearsing, and copying/reproducing content, resulting in increased capacity to recall

information in a coherent narrative (as opposed to isolated statements).

Third, constructive engagement requires learners to extend their knowledge

beyond the content that is presented. Constructive engagement promotes the knowledge-change processes of generation and inference, which require the production of new content.

Inference can be achieved through learning activities that invite students to expound upon

what was presented rather than simply remember it (Bruchok, Mar, & Craig, 2016). Such

activities consist of generating explanations, creating examples, and elaborating upon

available information. Subsequently, Chi and Wylie (2014) assert that constructive

engagement is a likely candidate to facilitate more meaningful processing such as a

strong mental representation, comprehension, conceptual interrelation, transfer, and

schema change, unlike active or passive processing.

Finally, interactive engagement includes direct contact and dialogue with other

learners such that both learners cooperate to add individual components to the construct

(Chi, 2009; Evans & Cuffe, 2009). This causes the knowledge-change process known as co-inference, allowing the integration of a partner's additional knowledge into one's own,

as can be gained through activities such as group discussion or “jigsaw”-type activities

(Chi & Menekse, 2015). Because the video lecture in this study is viewed independently,

as is often the case in educational settings, the interactive construct does not come into

play in the present study. Overall, the ICAP model will be used to motivate this study

and interpret the results, with particular attention to the passive, active, and constructive

engagement modes in the ICAP model.
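The subsumption ordering among the four engagement modes described above can be sketched as a simple ordered enumeration. This is an illustrative model of the taxonomy only; the class and function names below are hypothetical and are not materials from the study.

```python
# Illustrative sketch of the ICAP engagement hierarchy (Chi & Wylie, 2014).
# Higher values denote deeper engagement; each mode subsumes those below it.
from enum import IntEnum

class Engagement(IntEnum):
    PASSIVE = 1       # attentive listening, no further overt activity
    ACTIVE = 2        # manipulating material (underlining, rehearsing, copying)
    CONSTRUCTIVE = 3  # generating new content (explanations, examples, inferences)
    INTERACTIVE = 4   # co-constructing knowledge in dialogue with a peer

def subsumes(mode: Engagement, other: Engagement) -> bool:
    """A deeper mode presupposes the processes of every shallower mode."""
    return mode >= other

# Constructive engagement presupposes active (and passive) processing.
assert subsumes(Engagement.CONSTRUCTIVE, Engagement.ACTIVE)
```

Because the lecture in this study is viewed independently, only the first three levels (passive, active, constructive) are relevant to the experimental conditions.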


Classifying learning activities through an ICAP lens necessitates examination of

additional factors before concluding which mode of engagement is associated with the

activity. To assess whether learning activities can be categorized as passive, active, or

constructive requires three contextual considerations: A) identifying the specific type of

activity implemented during learning, B) determining the enactment of the activity (how

participants carry out the task given the experimental constraints), and C) determining the

cognitive outcomes of that activity. The three experimental manipulations used in this

study and the related ICAP factors are summarized in Table 1.

Table 1. The Role of ICAP in the Current Experiment

Learning Activity: Self-testing
  Specific Type: Free recall
  Enactment: Verbatim transcription, summarization, elaborative retrieval, explaining

Learning Activity: Note revision
  Specific Type: For self, for others
  Enactment: Re-writing verbatim notes, re-organizing, adding additional lecture pieces, elaborating, inferring, drawing

Learning Activity: Restudy
  Specific Type: Rereading notes
  Enactment: Re-reading notes verbatim, selectively re-reading certain components, covertly practicing retrieval

Cognitive Outcome (all activities): Performance on criterion tests (free recall, cued recall, and integration)

Summary. The present study examines whether, in the context of live-streaming

video lectures such as webinars, segmented (interpolated with activity) video lectures are

more effective than continuous video lectures. Few studies have directly compared both


lecture types, and those that do report contrasting outcomes (Coats, 2016; Di Vesta &

Smith, 1979; Luo, Kiewra, & Samuelson, 2016). Further, no research has examined this

particular question as it pertains to live-streaming “webinar” video lectures, which are

utilized frequently in flipped, hybrid, and online courses (Breslow et al., 2013). The

present study also tests and extends recent findings regarding the role of self-testing in

learning, and builds upon research on notetaking, note revision, and perceived peer

involvement to propose a novel learning activity (note revision for others). The

background literature for each component and its place within the ICAP framework is

described next.

Background for the Current Study

The experimental design of this study is a 2 (Lecture type: interpolated,

continuous) X 3 (Activity: note revision for others, self-testing, restudy) between-subjects

factorial design. There are two overarching experimental questions. The first question is

whether interpolated lectures are superior to continuous lectures for all dependent

variables, which is a comparison that has not been investigated with webinar-type video

lectures. The second general question is whether revising one’s lecture notes with the

intention to provide them to another participant is superior to self-testing, and whether

these potential advantages apply to notes and criterion tests differentially. An interpolated

testing effect is expected to occur as a replication from recent research (Jing, Szpunar, &

Schacter, 2016), and the new activity (note revision for others) should aid in conceptual

integration more so than self-testing (discussed below).
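The 2 X 3 design described above can be sketched as an enumeration of its six between-subjects cells. This is a hypothetical illustration; the condition labels paraphrase the text and are not the author's materials.

```python
# Hypothetical sketch of the 2 (Lecture type) x 3 (Activity)
# between-subjects factorial design described in the text.
from itertools import product

lecture_types = ["interpolated", "continuous"]
activities = ["note revision for others", "self-testing", "restudy"]

# Each participant is assigned to exactly one of the six cells.
cells = list(product(lecture_types, activities))

for lecture_type, activity in cells:
    print(f"{lecture_type} lecture + {activity}")
```

Crossing the two factors makes it possible to test both the main effect of lecture type (interpolated versus continuous) and the main effect of interpolated activity, as well as their interaction.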


In the following sections, I present the theoretical and empirical issues in the

research literature motivating interpolated lectures, self-testing, and note revision for

others. Then I describe the experimental approach to addressing these issues. First,

cognitive hindrances are described to illustrate why continuous lectures are impractical,

and thus to justify the use of interpolation by live-streaming lecturers as a solution. Second, problems are presented for three common types of activities

students and instructors use to enhance lecture learning. These include self-testing, note

revision, and peer involvement, which then lead to the proposed solution to revise notes

with the intention of providing them to another participant. Finally, dependent variables

are introduced and rationalized.

Interpolated Lectures

The empirical comparison of interpolated to continuous lectures in this

dissertation is warranted due to several known cognitive detriments encountered during

lectures. For example, students only retain information from the first 10 minutes of

lecture (Hartley & Davies, 1978). This effect is theoretically driven by the presence of

three primary factors, which are described next. Although an established idea, continuous

video lectures reliably elicit higher rates of mind-wandering in students (Risko,

Anderson, Sarwal, Engelhardt, & Kingstone, 2012; Seli, Carriere, & Smilek, 2015;

Szpunar, Moulton, & Schacter, 2013; Wilson & Korn, 2007). Mind-wandering results in

lack of engagement, and therefore sparse memory for parts of the lecture. Proactive

interference, which is defined as an inverse relationship between learning new

information and the number of prior learning trials (Kane & Engle, 2000; Mayer &


Moreno, 2003; Nunes & Weinstein, 2012; Watkins & Watkins, 1975; Wixted, 2004;

Wixted & Rohrer, 1993), is a significant negative factor in video lectures as well. Due to

the nature of information processing, mental resources are quickly depleted as the lecture

progresses (Mayer & Moreno, 2003; Sweller, 1994; Wixted, 2004). This issue is

multiplied when the content is novel, complex, and/or delivered at a fast rate (Aiken et

al., 1975). Unless learners implement and sustain an advanced learning strategy or

possess extensive background knowledge (Fiorella & Mayer, 2015), they will fall victim

to working memory overload, and subsequently proactive interference, during a college-

level science lecture. Additionally, compared to face-to-face lectures, students exhibit

increased amounts of both mind-wandering and proactive interference when learning

from video lectures (Jing et al., 2016; Schacter & Szpunar, 2015). Finally, and

paradoxically, learning during lectures can be further hindered by cognitive load caused

by notetaking (Aiken et al., 1975; Bui & Myerson, 2014; Di Vesta & Gray, 1973; Di

Vesta & Smith, 1979; Piolat, Olive, & Kellogg, 2005). Together, proactive interference,

mind-wandering, and notetaking suggest that continuous video lectures create

unmanageable levels of cognitive load (Paas, Renkl, & Sweller, 2003; Sweller, Ayres, &

Kalyuga, 2011).

Previous studies have altered lectures to resolve some of these obstacles. Di Vesta

and Smith (1979) discovered that the “pause procedure” (where students discussed the

lecture content with peers during scheduled lecture breaks) was more effective than

discussing content after lecture. Short writing assignments (Butler, Phillmann, & Smart,

2001), quizzes (Roediger, Agarwal, McDaniel, & McDermott, 2011), and clicker


responses (Bunce, Flens, & Neiles, 2010; Mayer et al., 2009) have been popular and have

resulted in positive outcomes, such as increased exam grades, reduced proactive

interference, and integration of concepts (Jing et al., 2016; Narloch, Garbin, & Turnage,

2006; Padilla-Walker, 2006; Roediger, Agarwal, et al., 2011; Szpunar, McDermott, &

Roediger, 2007; Weinstein, Gilmore, Szpunar, & McDermott, 2014).

In sum, the primary motivation behind interpolated lectures stems from the

distributed practice literature, which proposes that segmentation can reduce cognitive

load by presenting portions of information (sequentially) rather than uninterrupted,

massed versions (Clark & Mayer, 2010; Florax & Ploetzner, 2010; Johnson & Mayer,

2009; Lusk et al., 2009; Mayer, 2008, 2010; Mayer & Alexander, 2011; Mayer &

Moreno, 2003). There are no studies that directly compare continuous with interpolated

video lectures. Critically, studies assessing the impact of interpolation only compare two

interpolated activities, as observed in interpolated testing (Jing et al., 2016; Szpunar,

Jing, & Schacter, 2014; Szpunar, Khan, & Schacter, 2013) and pause procedure studies

(Bachhel & Thaman, 2014). Neither of these projects compared interpolated testing to

post-lecture testing. Therefore, a true comparison of an interpolated versus continuous

lecture is warranted to investigate whether interpolation has inherent learning-

enhancement properties.

Self-Testing

In addition to altering the lecture type (continuous versus interpolated), many

different learning activities can be implemented to further enhance learning (Dunlosky,

Rawson, Marsh, Nathan, & Willingham, 2013). However, recent analyses have


demonstrated that the effectiveness of these activities is highly dependent on individual factors (such as background knowledge and motivation), on some degree of training (e.g., concept-mapping, summarization), and/or on others' contributions

to group work (Bruchok et al., 2016; Dunlosky et al., 2013; Fiorella & Mayer, 2015a,

2015b).

Self-testing, most notably free recall, has for many years held an acclaimed title

as the most practical and effective method for learning and retention (Blunt & Karpicke,

2014; Carpenter, Pashler, Wixted, & Vul, 2008; Karpicke & Blunt, 2011; Karpicke &

Roediger, 2007, 2008; McDaniel, Roediger, & McDermott, 2007; Roediger, Putnam, &

Smith, 2011; Zaromb & Roediger, 2010). Indeed, self-testing is reliably better for

retention than non-selectively (i.e., passively) restudying the material due to the desirable

difficulties it requires (Bjork & Bjork, 2011) as well as direct and indirect effects

(Rowland, 2014).

However, criticism has recently emerged arguing that self-testing is unlikely to be the esteemed, generative process some claim it is. Using an ICAP protocol, Bruchok et al.

(2016) challenged the notion that self-testing (in the form of free-recall) elicits

constructive engagement. The crux of this argument is that free recall promotes

“potent, but piecewise, fact learning” (Pan, Gopal, & Rickard, 2016), which Bruchok et

al. (2016) characterized as passive engagement. Other studies have emerged with similar

challenges, claiming that free recall enhances retention, but not integration, application,

or inference, especially when learning material is complex, is delivered over a long

period of time (i.e., 30 minutes), and/or has high elemental interactivity (Agarwal, 2011;


Roelle & Berthold, 2017; Sweller, 2010; Tran, Rohrer, & Pashler, 2015; Van Gog &

Sweller, 2015; Wooldridge, Bugg, McDaniel, & Liu, 2014).

The studies on interpolated testing assessed long-term memory for lecture

material using brief delays (5-10 minutes), during which participants practiced distractor

tasks. Although an effect for testing was still observed, research examining interspersed

testing with text yielded opposite effects after a longer delay was implemented (Wissman

& Rawson, 2015; Wissman, Rawson, & Pyc, 2011). Therefore, a critical component in

the current study assessed whether effects from the manipulations were observable after a

24-hour delay.

Note Revisions for Others

Notetaking and note studying are of significant importance in content learning (Benton, Kiewra, Whitfill, & Dennison, 1993; Kiewra, Dubois, et al., 1991). When notetaking is effective, learning occurs through the act of notetaking itself, producing a so-called "encoding" effect. However, there are cognitive difficulties encountered with

learning while notetaking. When taking notes, students must engage in several processes

simultaneously, such as selecting, organizing, and then transcribing lecture material in a

timely manner (Peverly et al., 2013). In addition to the cognitive load imposed by

notetaking (Aiken et al., 1975; Bretzing & Kulhavy, 1979; Piolat et al., 2005), students

today are less adept at employing self-regulatory learning behaviors during notetaking

(Peverly, Brobst, Graham, & Shaw, 2003) and are less physiologically capable of

overcoming these challenges due to slow transcription speed (Bassili & Joordens, 2008;

Connelly, Dockrell, & Barnett, 2005). Because of this cognitive exhaustion, students


encounter fatigue effects early on in the lecture (Hartley & Davies, 1978), resulting in

sparse notes containing only around 35% of the lecture’s points (Kiewra, Mayer,

Christensen, Kim, & Risch, 1991; Luo et al., 2016). The likelihood that students will

remember information outside of what they transcribed into their notes is next to none

(Bui & Myerson, 2014; Peverly et al., 2003b; Peverly et al., 2013).

In order to address the cognitive processing issues associated with notetaking, instructors can assign students to work together on a task. Although video lectures are

designed to be viewable from locations other than classrooms (Copley, 2007; Lyons,

Reysen, & Pierce, 2012), instructors frequently assign learners to work together (either

electronically or in-person) on various projects (Comer, Clark, & Canelas, 2014; So &

Brush, 2008). Despite the many advantages in the collaborative learning literature

(Cranney, Ahn, McKinnon, Morris, & Watts, 2009; So & Brush, 2008), whether peer

involvement positively affects learning is highly dependent on the students’ individual

differences (Chi & Menekse, 2015), anxiety toward the activity (Renkl, 1995), as well as

when it occurs during learning (Di Vesta & Smith, 1979). At the very least, partner

involvement benefits from some form of training and/or partner matching (Fiorella &

Mayer, 2015a, 2015b; Luo et al., 2016). I propose that a novel manipulation, note revision for others, will yield learning gains from a video lecture that differ from those of self-testing. Next, I turn to evidence in support of this new learning activity.

When students miss lecture opportunities, they will commonly ask peers for a

copy of their notes. Similarly, when students with learning accommodations need help

with notetaking, instructors ask for peer volunteers to take notes for those students. Few


argue against the benefits of reviewing the peers’ notes (Carter & Van Matre, 1975; Di

Vesta & Gray, 1972), but no research to date has examined how the peer notetakers’

learning may be affected with such an assignment. It is from these scenarios that the

novel intervention, note revision for others, is motivated.

Although laptops are increasingly used in classrooms (Fink III, 2010),

handwriting is still General Psychology students’ primary notetaking method (Jennings &

Taraban, unpublished data). Most students have never been given instruction on

notetaking strategies (Williams & Eggert, 2002), and consequently, numerous factors

influence whether the encoding benefit (memory enhancements derived from transcribing

notes in the first place) is effective long-term (Bui & Myerson, 2014; Fisher & Harris,

1973; Rickards & Friedman, 1978). Thus, it makes sense to focus on improving the

notetaking process by targeting note revision. Since taking more handwritten notes

predicts higher performance (Peverly et al., 2007), note revision could serve as a

mechanism through which to extend note quantity and quality without resorting to

passive, verbatim strategies. Further, note revision could entice learners to adopt a more global evaluation of how well their notes portray the lecture's points and thus increase conceptual clarity and relatedness.

Unfortunately, the role of note revision has only recently been explored, and the

modest benefit for individual note review implies that at this point too little is known

about how to master its potential. One promising exception, which marries the two constructs of note revision and peer involvement, is the finding that interpolating both of these factors produced a significant advantage for performance (Luo et al., 2016). In consideration of


the constraints described about both partner involvement and video lecture learning, I

proposed that the new combination (revising notes with the expectation to give them to a

peer for study) would foster learning benefits through perceived social presence, note

revision, and successively, active or constructive engagement. This possibility remains

untested and is therefore a central question in the current experiment. Further, the implied dependence of another peer (occasionally referred to as a fictitious other) can be enough of a motivation to instigate meaningful learning regardless of whether that

peer is ever actually involved (Daou, Buchanan, Lindsey, Lohse, & Miller, 2016;

Gregory, Walker, Mclaughlin, & Peets, 2011; Hoogerheide, Deijkers, Loyens, Heijltjes,

& van Gog, 2016; Nestojko, Bui, Kornell, & Bjork, 2014; Risko & Kingstone, 2011).

In some cases the anticipation of peer involvement is debilitating, whereas in

others it is beneficial. These differences may depend on the activity participants are

asked to perform. Participants who revise their notes with the expectation of providing

them to another participant should benefit from similar meaningful engagement processes

(seeking connections, asking questions, metacognitive awareness, active or constructive

processing) presumed to benefit teachers more than pupils (Gregory et al., 2011),

but without the detriments associated with actual peer involvement. Benefits from note

revision and meaningful engagement were expected to occur tacitly.

Overview

Because interpolated self-testing has been shown to enhance learning from a

video lecture compared to interpolated restudy (Jing et al., 2016), I expected a replication

of this finding in the current experiment. However, these studies did not assess whether


learners employed different types of engagement modes. Free and cued recall were used

as criterion tests with the additional prompt to relate cued-recall items to other portions of

the lecture in order to assess conceptual integration, a cognitive outcome that ICAP

claims may be driven by active and constructive engagement. Novel analyses proposed in this dissertation assessed additional learning factors (described in the dependent variables section) and compared them to the outcomes of note revision for others.

Dependent Variables

In this section, I will describe the variables I measured in the dissertation.

Note Quantity and Temporal Distribution

In the studies on interpolated testing, experimenters gave participants completed

handouts of the lecture’s PowerPoint slides for notetaking. Although participants took

more notes in the self-test conditions, an important consideration of distributing

PowerPoint slides is that these slides do not facilitate the generative effects of

transcribing notes for oneself (Hartley, 1976; Katayama & Robinson, 2000; Kim, Turner,

& Pérez-Quiñones, 2009). In the present experiment, participants generated notes without

additional aids, as is often the case in learning contexts. Further, the demands imposed on

notetakers during lecture may result in fatigue effects in continuous, but not interpolated,

lectures, evidenced by more notetaking throughout the middle and end of the lecture

rather than just the beginning. Therefore, in the present study the origin of notes relative

to the lecture was also assessed, through a variable termed temporal distribution.

Lecture note factors have not been investigated in the literature on interpolated

video lecture learning. In the present experiment, I examined how lecture type and


activity affected note quantity and temporal distribution when participants wrote their

notes by hand. Luo et al. (2016) showed that the number and type of revisions (lecture

information versus elaborations) made in the note revision groups varied depending on

partner involvement and when the revisions took place. Therefore, the examination of

note quantity, temporal distribution, and revision type were new analyses under the

present set of parameters.

Free Recall Quantity and Temporal Distribution

Many studies on memory incorporate free recall as a standard for memory

retention. Since free recall reveals stored content that may otherwise be inaccessible in

cued-recall (Tulving & Pearlstone, 1966), it was included as one form of memory

assessment (free recall quantity). Recent studies assert that interpolated testing reduces proactive interference, which is operationalized as free-recall performance on the final lecture

segment only. In these experiments, memory for the beginning and middle of the lecture

is only assessed with cued-recall. Since other studies have shown that most students

forget information toward the middle and end of the lecture (Hartley & Davies, 1978),

another aim of the dissertation was to extend this assumption to free recall for the entire

lecture (free recall temporal distribution).

Cued Recall

Similarly to the motivation for free recall, cued recall can reveal insights to the

location and degree of memory for target lecture material. To implement cued recall, Jing

et al. (2016) first presented participants with PowerPoint slides from the lecture and then

asked them to elaborate on the information in the slides. They found that interpolated


self-testers recalled more lecture information about the slide content than those who

studied during the interpolated pauses. Thus, I expected to replicate the interpolated

testing effect. Due to the presumed types of cognitive processing that notetaking and

revising require, I also expected participants in the note revision groups to perform better

than the restudy and self-testing conditions on cued recall.

Conceptual Integration

Interpolated testing increased rates of segment “clustering” in final recall

compared to participants who completed unrelated distractor tasks (Szpunar, McDermott,

& Roediger III, 2008) or studied their notes (Jing et al., 2016; Szpunar et al., 2014). In

Jing et al.’s (2016) paper, integration was measured in two ways: for free recall, instances

in which participants included a direct reference to another portion of the lecture, and for

cued recall, by the amount of relevant elaboration generated when presented with a

lecture slide and asked to expound on how it related to other parts of the lecture.

Interpolated self-testers made more integration statements than the interpolated restudy

group.

While I aimed to replicate this finding with interpolated testing, I also predicted

that integration performance would be higher among note revision participants. This

prediction is in light of recent challenges toward free-recall’s effectiveness as a learning

tool (Agarwal, 2011; Mintzes et al., 2011; Pan et al., 2016; Roelle & Berthold, 2017).

Bruchok et al. (2016) compared self-testing to learning strategies that directly fostered

constructive processing (generating explanations for close family members). On

inference and application questions, participants who constructed explanations


outperformed those who engaged in self-testing. Other authors have supported the notion

that according to the ICAP framework, self-testing emphasizes concept accessibility, but

not assimilation (Fiorella & Mayer, 2015b).

Metacognitive Judgments

Szpunar et al. (2014) stated that interpolated tests significantly reduced

overconfidence and increased calibration compared to participants who were not tested.

This is important because judgments of learning (JOLs) can indicate whether learners’

inaccurate memory perceptions can be traced back to their prescribed learning activities

(Son & Metcalfe, 2000). Therefore, I expected the interpolated testing group to yield

more precise absolute accuracy and lower JOLs compared to other conditions. Since

research on metacognition and notetaking is scant, it was unknown whether similar

effects would stand for note revision.
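One common way to operationalize these metacognitive measures (a minimal sketch with hypothetical function names, not the dissertation's actual scoring script) treats absolute accuracy as the unsigned difference between a learner's JOL (0-100 scale) and actual test performance, with the signed difference indicating over- or underconfidence:

```python
def absolute_accuracy(jol, percent_correct):
    """Unsigned difference between predicted (JOL) and actual performance.

    Smaller values indicate better calibration.
    """
    return abs(jol - percent_correct)


def overconfidence(jol, percent_correct):
    """Signed difference: positive means the learner predicted better
    than they actually performed (overconfidence)."""
    return jol - percent_correct


# Hypothetical participant: predicted 80 on the JOL scale, scored 55% correct.
print(absolute_accuracy(80, 55))  # 25
print(overconfidence(80, 55))     # 25 (overconfident)
```

Under this sketch, the prediction in Hypothesis 8.2 amounts to expecting the note revision conditions' absolute-accuracy values to fall between those of the self-testing and restudy conditions.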

Hypotheses Related to Interpolated Lectures

Hypothesis 1. I predicted a benefit for interpolated lecture type on all of the

dependent variables:

H1.1. Note quantity

H1.2. Note temporal distribution

H1.3. Note revision quantity

H1.4. Note revision type (elaborative, proof-reading, lecture-based, visual)

H1.5. Free recall quantity

H1.6. Free recall temporal distribution


H1.7. Cued recall target performance

H1.8. Same-segment elaborations

H1.9. Integration

Interpolated lectures, regardless of activity, may produce higher outcomes than

those that are continuous. Performance on post-lecture activities could be driven by

interpolation’s cognitive-processing properties, such as episodic cues, “chunking,” and

consolidation due, in part, to reduced proactive interference.

Hypotheses Related to Interpolated Lecture Notes

Hypothesis 2.1. The second hypothesis focused on the effect of activity type in

note outcomes. Jing et al. (2016) found that interpolated self-testing produced more notes

than interpolated restudy, and Luo et al. (2016) found that interpolated note revisions

produced more notes than post-lecture note revisions; therefore, I predicted that the

interpolated note revision group would record the most notes (sans revisions), and the

interpolated self-testing group would record more notes than the interpolated restudy

group (Hypothesis 2.2, a replication from Jing et al., 2016).

Hypothesis 3. I predicted that the trend from hypothesis 2.1 would also hold true

for temporal distribution, such that interpolated note revisers would continue to transcribe

more notes than the interpolated self-test and restudy conditions throughout the beginning

and end of the lecture as well.


Hypotheses Related to Free Recall and Temporal Distribution

Hypothesis 4. First, transfer-appropriate processing theory suggests that learning

trials that mimic criterion tests result in the best performance (Morris, Bransford, &

Franks, 1977). Therefore, participants in the self-test groups were expected to be more

productive than restudy and note revision on the final free recall test.

Hypothesis 5. Second, because interpolated self-testing is hypothesized to protect

learners against proactive interference (Szpunar et al., 2008) (previously qualified as

performance on only the last 5 minutes of lecture), I expected interpolated self-testers to

recall more information from the beginning, middle, and end of the lecture than those

who restudied.

Hypotheses Related to Cued Recall and Integration

Hypothesis 6.1. Based on the evidence for teaching expectancy effects (Nestojko

et al., 2014) and constructive learning (Chi, 2009), I predicted a main effect of activity on

cued recall, such that the note revision groups would yield higher performance than self-

testing on cued recall target and (Hypothesis 6.2) same-segment elaboration when

prompted to elaborate upon the target topic.

Hypothesis 7. Whereas free recall may enhance fact retention (Pan et al., 2016),

note revision for others may promote more active/constructive, relational, and

comprehension-driven memory networks that are best accessed with relation-based

assessments (Morris et al., 1977). Therefore, since note revision for others possibly

highlights the interconnectivity between concepts, I presumed that when prompted in the


integration criterion, note revision would generate more conceptually integrative (cross-

lecture) statements than those who self-tested or restudied.

Hypotheses Related to Metacognition

Hypothesis 8.1. In congruence with the retrieval and metacognition literature, I expected the self-testing conditions to show the greatest absolute accuracy (King, 1991; Szpunar et al., 2014). Second, restudying increases the number of erroneous cues learners use to estimate their retention (Carpenter, 2009; Cull, 2000; Dunlosky et al.,

2013; Thomson & Tulving, 1970). Note revision potentially combines restudying and

retrieval/construction (Luo et al., 2016; Williams & Eggert, 2002). Therefore, I envisaged

that the note revision conditions’ absolute accuracy scores would fall between self-testing

and restudy (Hypothesis 8.2).

Hypothesis 9.1. Learners are reliably overconfident directly after learning or studying to-be-tested information (Son & Metcalfe, 2000). Although I expected all conditions

to be overconfident directly following the learning session on day 1, I predicted that the

self-testing conditions would be less overconfident than the note revision and restudy

groups. Second, delayed-JOL literature suggests that participants’ predictions should be

significantly lower after the 24-hour delay (Nelson & Dunlosky, 1991). In combination

with research demonstrating that retrieval reduces overconfidence (Szpunar et al., 2014),

self-testers were expected to make lower JOLs than the note revision and restudy groups

after a delay (Hypothesis 9.2).


CHAPTER II

METHOD

Participants

One hundred and eighty volunteers (46 male, 134 female) from the Texas Tech

University General Psychology pool participated for course credit. Participants were on

average 18.72 years of age and all were at least 18 years old. On average, participants had

completed 27.12 total credit hours (SD = 20.64), had less than 1 hour of formal

experience with video-based academic lectures, and reported little to no prior knowledge of the topic of language development.

Design

The design was a 2 (Lecture type: interpolated, continuous) x 3 (Activity:

notetaking with note revision, notetaking with self-testing, notetaking with restudy)

between-subjects factorial. Participants were randomly assigned to one of six conditions

totaling 30 participants per condition. The dependent variables from the lecture portion of

the experiment included total number of words (including short-hand abbreviations) and

lecture ideas transcribed into notes, note quality (temporal distribution of notes in relation

to the lecture segments), and number and type of note revisions for the revision group

(number of lecture-based additions versus external elaborations). For the criterion

performance, dependent variables included free recall quantity (number of correct idea

units recalled) and quality (temporal distribution), cued recall (retrieving correct

information in response to a prompt, correct elaboration upon each concept to same-

segment information), and conceptual integration (correct elaborations to lecture

information outside of the target lecture segment).
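Random assignment to the six cells of the 2 x 3 between-subjects design can be sketched as follows (a minimal illustration assuming 180 participants and equal cell sizes of 30; this is not the software actually used in the experiment):

```python
import itertools
import random

lecture_types = ["interpolated", "continuous"]
activities = ["note revision", "self-test", "restudy"]

# Build the six cells of the 2 x 3 factorial, with 30 slots per cell (180 total).
conditions = [combo
              for combo in itertools.product(lecture_types, activities)
              for _ in range(30)]
random.shuffle(conditions)  # randomize the order in which cells are assigned

# conditions[i] is the (lecture type, activity) cell for participant i.
print(len(conditions))                                   # 180
print(conditions.count(("interpolated", "self-test")))   # 30
```

Shuffling a pre-balanced list (rather than drawing each cell at random) guarantees exactly 30 participants per condition, matching the design described above.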


Materials

The experiment was conducted using Qualtrics software and a Windows

computer. The overall instructions informed participants that they would be watching a

30-minute video lecture over the topic of language development and taking notes in

preparation for a subsequent test. For notetaking, blank sheets of lined notebook paper

and a black pen were provided. A separate red pen was provided for the note revision

conditions to distinguish original versus additional notes (Luo et al., 2016). All testing took place on a computer in the Qualtrics experiment.

The lecture was a 30-minute video over the topic of Language Development,

taught by Dr. Jeanette Norden from the Great Courses series, and featured the instructor

lecturing a class from a podium. The delivery rate for the lecture was approximately 120

words per minute, which is average for college instructors (Foulin, 1995; Wong, 2014).

The lecture was divided into six segments of an average length of 5 minutes and 9

seconds each. A master code containing the lecture's 261 unique idea units, main ideas, important details, and less-important details was developed in a previous norming experiment by an independent group of participants from the same population as the participants in the present investigation (see Appendix 2).

each segment can be observed in Table 2. The JOL prompt consisted of a scale of 0-100.


Table 2. Descriptive Variables for Each of the Lecture Segments

Criterion tests were designed as follows. The free recall prompt asked participants

to recall as much information as possible from the lecture. The cued-recall portion

presented participants with two topics from each lecture segment and asked them to A)

elaborate on the topic presented and B) elaborate upon how it related to information from

other parts of the lecture (to assess integration). Each topic prompt was selected to be based on information distributed throughout the lecture while remaining semantically distinct from the other prompts within the overall topic of language development, so as to reduce redundancy. Further, analyses of free recalls and note

content from previous experiments utilizing the lecture (using a continuous lecture)

substantiated the selection of these topics, as well as evaluations of semantic relatedness

to both the topic’s particular lecture segment and entire lecture using LSA scores

(Landauer, 2006) (see Appendix 3).

Segment   Length (seconds)   Words (total)   Idea Units (total)   Main Ideas   Important Details   Less-Important Details
1         296                655             39                   2            13                  11
2         322                688             46                   2            22                  15
3         310                752             34                   3            17                  11
4         359                812             41                   1            24                  8
5         299                768             47                   0            14                  16
6         272                674             49                   2            12                  11
Mean      309                724             42                   1.66         17                  12
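LSA relatedness scores are, at bottom, cosine similarities between vectors in a reduced semantic space. A minimal sketch of the cosine computation is below (the toy vectors are hypothetical; the actual LSA space from Landauer, 2006, is derived from a large corpus and is not reproduced here):

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors (1 = identical
    direction, 0 = orthogonal/unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Toy 3-dimensional "semantic" vectors for two topic prompts.
topic_a = [0.8, 0.1, 0.3]
topic_b = [0.7, 0.2, 0.4]
print(cosine(topic_a, topic_b))  # close to 1: semantically related prompts
```

In practice, a high cosine between a prompt vector and its lecture-segment vector would support the claim that the prompt represents its segment.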


Finally, a demographics questionnaire assessed gender, age, credit hours, content

interest, and major. A full description of instructions and list of specific questions can be

found in Appendix 4.

Procedure

On day 1, participants entered the laboratory in groups of up to four and signed

consent forms. They were then assigned to a computer, which displayed the first page of

the instructions informing them that they would be watching a 30-minute video lecture

over the topic of language development. The instructions asked participants to take notes

however they normally would using the provided lined notebook paper and black pen,

because they would be tested over the information when they returned the next day.

Additionally, participants in the note revision condition were told that their notes would

be given (anonymously) to another participant, who would then study them before taking

the same tests. The instructions notified all participants that at any point, the computer

may randomly assign them to revise, clarify, and elaborate upon their notes (using the red

pen), recall as much information as possible from what they had just learned, or restudy

their notes, and that this could occur at any point during or at the end of the lecture.

However, in reality, they revised their notes, self-tested, or restudied either during or

after the lecture. This component of the procedure (“random selection” from the

computer) was adapted from Jing et al. (2016) to reduce expectation effects across

conditions. Participants were asked to clarify any questions at this point with the

researcher before clicking “next” to begin the video lecture.


Before beginning the video lecture, participants were informed that the video

would begin playing immediately, and that they could neither pause nor rewind it. All

participants were encouraged to take notes. Participants clicked “next” to begin the

experimental portion of the study (see Figure 2).

Figure 2. Overview of the Procedure for Experiment 1

For the video lecture portion of the experiment, the computer screen featured the

video lecture, and all participants had lined notebook paper and a black pen for

notetaking. For all three interpolated conditions, participants watched 5 minutes of the

lecture and were then directed to the activity prompt for their condition. Specifically,

participants in the interpolated note revision group read a prompt stating that they would

have 2 minutes to elaborate their notes, and that they should make any changes,

additions, and elaborations to help the other participant learn the information best

(adapted from Luo et al., 2016) using the red pen. Similarly, participants in the

interpolated self-test condition read a prompt instructing them to recall as much


information as possible from what they had just learned, and the restudy participants were

instructed to study their notes. Before the lecture resumed, a screen with the notification

that the lecture would continue in 10 seconds was displayed (allowing all participants to

ready their notetaking materials). This continued for a total of six segments.

For the continuous lecture conditions, the lecture continued without interruption for the full 30 minutes. Afterward, participants were informed that they would have

12 minutes to revise their notes, recall, or restudy (instructions were identical to those in

the interpolated conditions).

After the video lecture portion of the experiment was complete, all participants

made a JOL for the final test. Finally, they handed their notes in to the experimenter

before leaving.

The following day, participants returned to the lab to complete the second portion

of the experiment. First, the same JOL prompt asked participants to again rate their

predicted performance on the various types of tests. Then, all participants were asked to

complete criterion tests in counterbalanced order (free recall before cued recall and

integration, or vice versa) on the computer. The order of cued recall/integration items was

randomized. After completing the tests, participants were automatically directed to a

demographics questionnaire. Finally, they were debriefed and thanked for their

participation.


CHAPTER III

STATISTICAL ANALYSES

Demographic Analyses

Descriptive statistics (mean, standard deviation) were obtained for the

participants’ demographic information, followed by a t-test (for gender) and one-way

ANOVAs to determine whether participants differed in pre-existing factors (credit hours,

classification, GPA, SAT/ACT scores). Previous experiments utilizing the same materials

demonstrated that a 2:1 ratio of female to male participants was expected, but none of my

prior analyses found any gender-based disparities in the dependent variables.

Checking Statistical Assumptions

First, using a univariate frequency distribution, outlier analyses were conducted

on the primary dependent variables. Extreme data points (more than three standard deviations from the mean) were transformed using the Winsor method (Yaffee, 2002).
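The Winsorizing step can be sketched as follows (a simplified illustration in which values lying more than three standard deviations from the mean are pulled in to the three-SD boundary; see Yaffee, 2002, for the procedure the dissertation actually followed):

```python
import statistics


def winsorize_3sd(values):
    """Clamp values lying more than 3 SDs from the mean to the 3-SD boundary."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    lower, upper = mean - 3 * sd, mean + 3 * sd
    return [min(max(x, lower), upper) for x in values]


# Hypothetical note-quantity scores; 250 is an extreme outlier.
scores = [48, 52, 50, 49, 51, 50, 47, 53, 50, 49, 51,
          50, 48, 52, 50, 49, 51, 50, 47, 53, 250]
cleaned = winsorize_3sd(scores)
# The outlier is pulled in to mean + 3*SD; typical scores are unchanged.
```

Note that with very small samples no point can lie three SDs from the mean, so this rule only bites when n is reasonably large, as it is here.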

Second, assumption verifications preceded all primary analyses. Specifically, I

tested assumptions of normality, linearity, and homogeneity of variance for GLM

univariate and repeated-measures ANOVAs. Independence of groups was assumed since

there were six different conditions in the experiment, and I implemented random

selection and assignment. For brevity, each test’s assumptions are only discussed if

violated.


Statistical Methods

There were four primary types of tests conducted on the dependent variables. 2

(Lecture type: continuous, interpolated) x 3 (Activity: note revision, self-test, restudy) x 3

(Temporal distribution: beginning, middle, end) GLM repeated-measures analyses of

variance (ANOVAs) were applied to the primary dependent variables that were examined

in relation to temporal distribution. These variables were lecture notes and free recall.

Cued recall (target performance and related) was assessed using a 2 (Lecture type:

continuous, interpolated) x 3 (Activity: note revision, self-test, restudy) GLM factorial

ANOVA. Univariate ANOVAs were used to assess integration, JOLs, and absolute

accuracy and as subsequent analyses to examine main effects and interactions. Finally, a

t-test was applied to total note revision quantity for the two note revision groups, and

separate ANOVAs assessing revision types were applied to the interpolated and

continuous note-taking groups.

Data Coding

Coders were blind to experimental conditions to prevent coding bias. Inter-rater

reliability was calculated for the dependent variables by first assigning two trained

research assistants to separately code 10% of each protocol type (two coders for notes,

two for free recall, and two for cued recall). Then, percent agreement was calculated.

Regardless of agreement on the first round, both raters and the experimenter discussed

ratings for the initial protocols to further increase agreement and answer any questions.

For the remainder of coding, the experimenter assigned both coders of a protocol type the

same protocols to code separately each week, followed by a weekly coding meeting

during which rating discrepancies were discussed and amended. Each pair of coders met


with the experimenter every week to review three random protocols (separately coded by

each person) together. Inter-rater reliability averaged .89 for notes, .85 for free recall, and

.94 for cued recall.
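Percent agreement between two independent coders can be computed as in the following sketch (the codings shown are hypothetical; the actual protocols were scored with the scheme in Appendix 5):

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of items on which two coders gave the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)


# Hypothetical codes (1 = idea unit credited, 0 = not credited) for 10 items.
coder_a = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
coder_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(percent_agreement(coder_a, coder_b))  # 0.8
```

Percent agreement is the simplest reliability index; it does not correct for chance agreement the way kappa does, but it matches the procedure described above.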

All participants’ notes were analyzed on two primary characteristics (note

quantity and temporal distribution), but only the note revision groups were assessed for

revision characteristics. After transcribing the hand-written notes into digital form, note

quantity was coded. Note quantity consisted of average number of “ideas” that matched

up to the idea units from the lecture. It is important to distinguish that for the note

revision conditions, the note quantity analysis only included the original, un-revised

notes, which were identifiable in black ink instead of red. Second, temporal distribution

was calculated based on the number of note ideas derived from the beginning, middle,

and end of the lecture (segments 1-2, 3-4, and 5-6, respectively). Note revisions were

scored based on overall number and type of note revisions (such as words added directly

from the lecture, elaborations from outside of the lecture). For details on note and

revision coding methodology, see Appendix 5.
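The segment-to-third mapping described above (segments 1-2 = beginning, 3-4 = middle, 5-6 = end) can be sketched as follows; the list of segment numbers is hypothetical:

```python
from collections import Counter

def temporal_bin(segment):
    """Map a lecture segment (1-6) to its temporal third."""
    if segment in (1, 2):
        return "beginning"
    if segment in (3, 4):
        return "middle"
    return "end"  # segments 5-6

# Hypothetical segment of origin for each idea in one participant's notes
note_segments = [1, 1, 2, 2, 3, 4, 5, 6, 6]
distribution = Counter(temporal_bin(s) for s in note_segments)
print(distribution)  # counts per temporal third
```

Tallying ideas per third in this way yields the three temporal-distribution scores that enter the repeated-measures analyses below.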


CHAPTER IV

RESULTS

Demographics

A 2 (Lecture type: continuous, interpolated) x 3 (Activity: note revision, self-test,

restudy) GLM multivariate ANOVA confirmed that there were no initial differences in

the demographic variables. Results can be observed in Appendix 6.

Lecture Notes

Note quantity. A 2 (Lecture type: continuous, interpolated) x 3 (Activity: note

revision, self-test, restudy) GLM univariate ANOVA was used to assess differences in

the number of ideas transcribed from lecture. There was no main effect for activity,

F(2,174) = .75, p = .47, partial η2 = .009, but there was a significant main effect of lecture

type, F(1,174) = 4.40, p = .03, partial η2 = .02, indicating that participants took significantly more notes during the interpolated lecture than during the continuous lecture. The interaction of activity x lecture was not significant,

F(2,174) = .10, p = .90, partial η2 = .001 (for descriptives, see Table 4.1).

Table 4.1 Note Quantity as a Function of Lecture Type and Activity

Activity   Interpolated   Continuous   Mean

Note revision 52 (16.04) 45.93 (17.39) 48.96 (16.71)

Self-Test 48.5 (19.19) 44.43 (13.66) 46.46 (16.42)

Restudy 54 (23.43) 47.01 (18.19) 50.50 (20.81)

Mean 51.5 (19.70) 45.8 (16.38)

Note. Means (SD) represent number of ideas recorded in notes.

Temporal distribution. To assess temporal distribution, a 2 (Lecture type:

continuous, interpolated) x 3 (Activity: note revision, self-test, restudy) x 3 (Temporal


distribution: beginning, middle, end) GLM repeated-measures ANOVA was applied to

the number of ideas transcribed in notes. First, there was a main effect of lecture type,

F(1,174) = 4.77, p = .03, partial η2 = .03, showing that there was a difference in temporal

distribution for interpolated versus continuous lectures. There was also a main effect of

temporal distribution, F(2,348) = 130.18, p < .001, partial η2 = .43. Multiple comparisons

with Tukey HSD corrections showed that the differences between notes taken at the

beginning versus middle, p < .001, beginning versus end, p < .001, and middle versus

end, p < .001, were all significant. There was no main effect of activity, F(2,174) = .57, p

= .56, partial η2 = .01.

Second, the interaction for temporal distribution x activity was not significant,

F(2,174) = .29, p = .74, partial η2 = .03, but the interaction for temporal distribution x

lecture type was significant, F(1,174) = 8.28, p = .005, partial η2 = .05. Finally, the three-

way interaction for temporal distribution x activity x lecture type was not significant,

F(2,174) = .43, p = .65, partial η2 = .005.

To explore the interaction for temporal distribution x lecture type, I applied a

univariate ANOVA contrasting lecture type to notes at the beginning, middle, and end of

lecture. As predicted, there was no difference in the number of notes taken at the

beginning of the lecture across both conditions, F(1,178) = .36, p = .54, partial η2 = .002,

and significantly more notes were taken throughout the middle F(1,178) = 7.67, p = .006,

partial η2 = .04, and end, F(1,178) = 7.63, p = .006, partial η2 = .04, of the lecture when

interpolated. Therefore, regardless of activity, interpolated lectures allowed participants


to continue notetaking throughout the lecture, whereas those in the continuous lecture

took most of their notes at the beginning (for descriptives, see Figure 4.1).

Figure 4.1 Temporal Distribution of Notes as a Function of Condition

Note. Error bars represent standard error. For lecture type, i = interpolated, c =

continuous. For activity, NR = note revision, ST = self-test, RS = restudy.

Number and type of note revisions. Luo et al. (2016) found that interpolated note-

revisers added more revisions during breaks in the class periods as opposed to after a

lecture, which is an effect I expected to replicate. In addition, since Luo et al. (2016)

found that note revision conducted during interpolated lectures resulted in not only more

note revisions but also more elaborative revisions, I predicted that this effect would be

mirrored with the different types of revisions in the current experiment. Revisions were

categorized as visual (arrows, circled/starred information, drawings, etc.), lecture-based



(information added from the lecture that was not already present in notes), elaborative

(information added from outside of the lecture, such as personal examples), and proof-

reading (editing grammar/spelling/punctuation, re-writing notes verbatim).

I applied a 2 (Lecture type: interpolated, continuous) x 4 (Note revision type:

elaborations, lecture-based, proof-reading, visual) repeated-measures ANOVA to the note

revision conditions’ notes. The main effect of note revision type was significant, F(3,174)

= 30.95, p < .001, partial η2 = .35, as was the effect of lecture type, F(1,58) = 10.10, p =

.002, partial η2 = .15. The interaction for lecture type x note revision type was also

significant, F(3,174) = 4.07, p = .02, partial η2 = .06.

To examine the interaction, I applied three t-tests to compare interpolated to

continuous for lecture-based, elaborative, and visual note revisions. To account for

multiple comparisons, I implemented Bonferroni corrections and adjusted the alpha to

.01. In contrast to my prediction, there was no significant difference between lecture type

for elaborative revisions, t(58) = -.43, p = .66. There was a nominally significant difference in the number of visual revisions added, t(58) = 2.10, p = .04, but this effect did not survive the Bonferroni correction. Finally, the interpolated group

added significantly more lecture-based revisions than the continuous note revision group,

t(58) = 4.14, p < .001 (see Table 4.2).
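The Bonferroni logic here, requiring each of the three follow-up tests to beat a family-wise-adjusted threshold, can be sketched with the reported p-values (the .001 entry stands in for the reported p < .001; note that .05/3 ≈ .017 is slightly more lenient than the .01 alpha used above):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag each test as significant only if it beats alpha / number of tests."""
    threshold = alpha / len(p_values)  # .05 / 3 ≈ .017 for three tests
    return [p < threshold for p in p_values]

# Reported p-values: elaborative, visual, lecture-based comparisons
print(bonferroni_significant([0.66, 0.04, 0.001]))  # [False, False, True]
```

Only the lecture-based comparison survives the correction, matching the pattern reported in the text.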


Table 4.2 Types of Note Revisions

Lecture Type   Elaborations   Lecture-Based   Proof-Reading   Visual          Mean
Interpolated   .33 (.74)      11.73 (6.12)    4.93 (4.97)     11.60 (13.11)   7.14 (6.23)
Continuous     .46 (1.52)     6.03 (4.41)     3.86 (4.57)     6.24 (4.94)     4.10 (3.86)
Mean           .40 (1.19)     8.88 (6.02)     4.40 (4.76)     8.92 (10.18)

Criterion Tests

All participants’ free recalls were scored for completeness, classification, and

temporal distribution (beginning, middle, and end of the lecture). Idea units present in

participants’ free recalls were assigned completion scores (0, .5, or 1), which were then

summed to represent each participant’s overall free recall score. More detail on the

coding process and criteria can be observed in Appendix 7.

Free recall quantity and temporal distribution. I applied a 2 (Lecture type:

interpolated, continuous) x 3 (Activity: note revision, self-test, restudy) x 3 (Temporal

distribution: beginning, middle, end) GLM repeated-measures ANOVA to the overall

number of idea units recalled. There was no effect of lecture type, F(1,174) = .003, p =

.96, partial η2 = .000, or activity, F(2,174) = .01, p = .98, partial η2 = .000, but the main

effect of temporal distribution was significant, F(2,348) = 72.08, p < .001, partial η2 =

.30. There was no interaction of lecture type x activity, F(2,174) = .67, p = .51, partial η2

= .008, or lecture type x temporal distribution, F(2,348) = .31, p = .72, partial η2 = .002.

The interaction for activity x temporal distribution was significant, F(2,174) = 4.18, p =

.01, partial η2 = .04. Finally, the three-way interaction for lecture type x activity x

temporal distribution was not significant, F(2,174) = .83, p = .43, partial η2 = .01.


Importantly, pairwise comparisons with Tukey’s HSD corrections showed that in

congruence with proactive interference theory, significantly more information was

recalled from the beginning of the lecture than the middle (p < .001) and end (p <.001).

More information was recalled from the middle than the end as well, p < .001. To

investigate the interaction of activity x temporal distribution, one-way ANOVAs (with

Tukey’s HSD corrections) for activity were applied to the beginning, middle, and end

free recall data. There was no significant difference in activity for the beginning,

F(2,177) = 1.40, p = .24, middle, F(2,177) = .19, p = .82, or end of the lecture, F(2,177) =

1.65, p = .19 (see Figure 4.2).

Figure 4.2 Temporal Distribution of Free Recall as a Function of Condition

Note. Error bars represent standard error. For lecture type, i = interpolated, c =

continuous. For activity, NR = note revision, ST = self-test, RS = restudy.

Cued recall. To score cued-recall responses, Jing et al. (2016) counted the total

number of factual statements participants recalled in reference to the presented Power

Point slides used in that study. That is, in Jing et al. (2016), participants were awarded 1



point per correct factual statement, which were typically recalled as independent clauses

(Auld Jr & White, 1956; Goldman-Eisler, 1972). The same coding scheme was applied to

cued recall in this experiment. A total cued recall score was calculated based on the

number of independent clauses that correctly addressed the prompt. Cued recall was

divided into two sub-scores: target recall and same-segment elaborations. Target recall

consisted of the portion of recall that directly “defined” or operationalized the topic

presented and was represented by the percentage of correct prompts addressed (i.e.,

“75%” indicates a score of 9 correctly addressed prompts out of 12). Same-segment

elaborations accounted for additional correct statements from the same target segment

and is represented as the total sum of these elaborations. For more detailed information

on scoring cued recall, see Appendix 8.
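Under this scheme, the two cued-recall sub-scores can be computed per participant. A sketch with hypothetical prompt data (field names are illustrative, not from the coding manual):

```python
def cued_recall_scores(prompts):
    """Return (target recall proportion, total same-segment elaborations).

    Each prompt dict holds:
      target_correct - whether the response defined/operationalized the topic
      same_segment   - count of additional correct clauses from that segment
    """
    target = sum(p["target_correct"] for p in prompts) / len(prompts)
    elaborations = sum(p["same_segment"] for p in prompts)
    return target, elaborations

# Hypothetical responses to 4 of the 12 prompts
prompts = [
    {"target_correct": True,  "same_segment": 2},
    {"target_correct": False, "same_segment": 1},
    {"target_correct": True,  "same_segment": 0},
    {"target_correct": True,  "same_segment": 3},
]
print(cued_recall_scores(prompts))  # (0.75, 6)
```

Target recall is a proportion of prompts addressed, while same-segment elaborations are a raw count, which is why Tables 4.3 and 4.4 report them on different scales.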

A 2 (Lecture type: interpolated, continuous) x 3 (Activity: note revision, self-test,

restudy) factorial ANOVA was applied to the target and same-segment elaboration data.

First, for target performance, there was no main effect of lecture type, F(1,174) = .45, p =

.50, partial η2 = .003, a marginal effect of activity, F(2,174) = 2.35, p = .09, partial η2 =

.03, and no interaction for lecture type x activity, F(2,174) = 1.57, p = .21, partial η2 =

.02 (see Table 4.3).


Table 4.3 Cued Recall Performance

Activity Interpolated Continuous Mean

Note revision .50 (.18) .46 (.15) .48 (.17)

Self-Test .51 (.18) .59 (.19) .55 (.18)

Restudy .50 (.19) .52 (.19) .51 (.19)

Mean .50 (.18) .52 (.18)

Note. Means (SD) are depicted as proportion correct.

Second, for number of same-segment elaborations, there was no main effect of

lecture type, F(1,174) = .01, p = .89, partial η2 = .00, nor activity, F(2,174) = .56, p = .57,

partial η2 = .006, and the interaction of lecture type x activity was not significant,

F(2,174) = 1.76, p = .17, partial η2 = .02 (see Table 4.4).

Table 4.4 Cued Recall Same-Segment Elaborations

Activity Interpolated Continuous Mean

Note revision 28.16 (9.49) 24.16 (9.62) 26.16 (9.68)

Self-Test 23.53 (12.68) 27.33 (12.84) 25.43 (12.79)

Restudy 23.53 (10.49) 24.43 (13.17) 23.98 (11.81)

Mean 25.07 (11.06) 25.31 (11.94)

Integration. Jing et al. (2016) calculated integration by counting the total

instances in which participants made factual statements from parts of the lecture outside

of the target segment on which they were tested. Similarly, I assessed integration by

counting the number of correct independent clauses referencing other lecture segments

per each topic presented. I then performed a 2 (Lecture type: interpolated, continuous) x 3

(Activity: note revision, self-test, restudy) univariate factorial ANOVA to the overall

number of integration statements. The main effect of activity was significant, F(2,174) =

4.36, p = .01, partial η2 = .05, but lecture type was not, F(1,174) = .58, p = .45, partial η2


= .003. The interaction of lecture type and activity was not significant, F(2,174) = .01, p

= .98, partial η2 = .000. Multiple comparisons with Tukey HSD corrections revealed that

across both lecture types, note revision conditions generated significantly more

connective references than the restudy conditions, p = .01, but the differences between

note revision and self-test, p = .12, and restudy and self-test, p = 1.00, were not

significant (see Table 4.5).

Table 4.5 Conceptual Integration Performance

Activity Interpolated Continuous Mean

Note Revision 5.00 (3.14) 5.20 (3.63) 5.10 (3.37)

Self-Test 3.83 (2.72) 4.10 (2.65) 3.96 (2.67)

Restudy 3.33 (2.44) 3.54 (2.49) 3.54 (2.67)

Mean 4.02 (2.83) 4.35 (3.02)

Note. Means (SD) indicate number of cross-lecture statements.

Metacognition

Absolute accuracy. First, recall proportion was calculated with number of free

recall idea unit credits divided by 261 (total possible independent lecture idea units).

Second, absolute accuracy was calculated by subtracting the proportion of participants’

recall from their JOL ratings. In the subsequent analyses, a value of 100% represents

maximum overconfidence with 0% recall, whereas -100% represents maximum

underconfidence with a recall of 100%.
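As a concrete sketch of this calculation on the proportion scale (the example participant's values are hypothetical; 261 is the total idea-unit count stated above):

```python
TOTAL_IDEA_UNITS = 261  # total possible independent lecture idea units

def absolute_accuracy(jol, recall_credits, total_units=TOTAL_IDEA_UNITS):
    """JOL (predicted proportion recalled) minus actual recalled proportion.

    +1.0 = maximal overconfidence (predicted 100%, recalled 0%);
    -1.0 = maximal underconfidence (predicted 0%, recalled 100%).
    """
    return jol - recall_credits / total_units

# A participant predicting 60% recall who earned 52.2 idea-unit credits
print(round(absolute_accuracy(jol=0.60, recall_credits=52.2), 2))  # 0.4
```

Positive values therefore indicate overconfidence, matching the consistently positive group means in Table 4.6.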

I applied a 2 (Lecture type: interpolated, continuous) x 3 (Activity: note revision,

self-test, restudy) univariate factorial ANOVA to absolute accuracy. There was no main

effect of lecture type, F(1,174) = .005, p = .94, partial η2 = .000, nor activity, F(2,174) =

1.56, p = .21, partial η2 = .02, and no interaction for lecture type x activity, F(2,174) =


1.77, p = .17, partial η2 = .02 (see Table 4.6). Therefore, JOL accuracy was not

differentially affected by lecture type or activity after a 24-hour delay.

Table 4.6 Absolute Accuracy

Activity Interpolated Continuous Mean

Note revision .40 (.18) .46 (.16) .43 (.17)

Self-Test .41 (.22) .34 (.17) .37 (.20)

Restudy .41 (.17) .43 (.17) .42 (.17)

Mean .41 (.19) .41 (.17)

Note. Means (SD) indicate proportions.

JOLs. To examine whether the independent variables impacted JOLs before and

after a delay, a 2 (Lecture type: interpolated, continuous) x 3 (Activity: note revision,

self-test, restudy) x 2 (Day: day 1, day 2) mixed-factorial repeated-measures ANOVA

was applied to JOLs. There was no effect of lecture type, F(1,174) = .47, p = .49, partial

η2 = .003, but the main effects of day, F(1,174) = 89.18, p < .001, partial η2 = .34, and

activity, F(2,174) = 3.24, p = .04, partial η2 = .04, were significant. There were no

significant interactions for day x lecture type, F(1,174) = .18, p = .18, partial η2 = .01,

day x activity, F(2,174) = 1.97, p = .14, partial η2 = .02, lecture type x activity, F(2,174)

= .94, p = .38, partial η2 = .01, or day x lecture type x activity, F(2,174) = .42, p = .65,

partial η2 = .005.

To examine the main effect of activity, multiple comparisons with Tukey HSD

corrections revealed that across both lecture types and days, the self-test conditions reported marginally lower memory estimates than the note revision groups, p = .06, but


there was no difference between note revision and restudy, p = 1.00, nor self-test and

restudy, p = .99 (see Table 4.7).

Table 4.7 JOL Ratings

                 Day 1                              Day 2
Activity   Interp.     Cont.       Mean        Interp.     Cont.       Mean
NR         .55 (.19)   .61 (.13)   .58 (.16)   .45 (.18)   .50 (.16)   .47 (.17)
ST         .49 (.24)   .49 (.17)   .49 (.20)   .45 (.22)   .39 (.18)   .42 (.20)
RS         .56 (.20)   .61 (.16)   .59 (.18)   .45 (.18)   .47 (.18)   .46 (.18)
Mean       .54 (.21)   .57 (.16)               .45 (.19)   .45 (.18)

Note. JOLs are presented as proportions of the lecture participants predicted they would

recall. For activity, NR = note revision, ST = self-test, and RS = restudy.


CHAPTER V

DISCUSSION

The primary purpose of this dissertation was two-fold: first, I directly compared

learning between interpolated and continuous lectures, and second, I examined self-

testing along the dimensions of ICAP theory by contrasting expected cognitive outcomes

to a novel learning intervention. I expected that, in line with distributed practice

literature, the interpolated lecture would provide better scores across all of the dependent

variables. Instead, interpolated lectures were only beneficial toward notetaking and

revision. This suggests that interpolated lectures may affect organizational processing at

the time of learning. Self-testing did not appear to benefit final free recall as a matter of

quantity or protection against proactive interference. However, in congruence with my

prediction, the novel manipulation, note revision for others, yielded a clear benefit for

conceptual integration compared to restudy. The fact that this experiment did not replicate

testing effects for conceptual integration challenges the constructive claims surrounding

retrieval-based learning (Carpenter, Pashler, & Vul, 2006; Zaromb & Roediger, 2010).

Next, each of these findings is addressed in more detail.

Interpolation

Under many circumstances, learning in a segmented manner benefits retention

and comprehension (Bachhel & Thaman, 2014; Mayer & Pilegard, 2014; Ruhl, Hughes,

& Schloss, 1987; Wissman et al., 2011; Yang & Shanks, 2017). Of interest in the current

study was whether a direct comparison between an interpolated and continuous lecture, in

combination with self-testing, note revision, and restudy activities, would still yield these


results. Since previous research has yielded mixed results on interpolation’s efficacy,

the new interpolation research from Jing et al. (2016) and Luo et al. (2016) led me to

predict that I would also find a benefit for interpolation. Instead, this prediction

(hypothesis 1) was only partially supported.

In this experimental paradigm, whether the interpolated lecture was effective was

contingent upon the corresponding dependent measure. In line with hypothesis 1,

interpolation was beneficial in that learners exhibited more favorable notetaking characteristics (greater note quantity, notetaking sustained across the lecture, and a greater number and variety of note revisions), supporting sub-hypotheses 1.1-1.4. Although I did not directly

measure mind-wandering in this study, Jing et al. (2016) operationalized attention to lecture as

the number of notes transcribed, and concluded that since interpolated self-testers

transcribed more notes than those who restudied, interpolated testing motivated learners

to stay on task. In contrast to hypothesis 2.1 and 2.2, there were no differences between

activity types in the number of notes transcribed or temporal distribution for notes.

Therefore, interpolation may increase attention to lecture and notetaking, regardless of

the type of activity in which participants partake.

Second, in contrast to hypothesis 1, very few elaborative note revisions were made in either note revision group. One explanation for this finding could

be due to task demands. Specifically, note-revisers in the current experiment were

explicitly encouraged to revise their notes so that another participant could study and

benefit from them. In Luo et al.’s (2016) experiments, the focus for revisers was on

making revisions that were both elaborative and kept for themselves instead of shared.


Elaborative revisions might not have been a priority for the participants due to low

external utility; specifically, participants may have surmised that elaborative revisions

would be unhelpful to their peers. This makes sense, since similar mechanisms (elaborative retrieval, self-explanation) are most beneficial when tied to learners' own idiosyncratic experiences, which would render such revisions fairly useless to a peer. It is

possible that explicit emphasis on elaborative revisions could increase their prevalence in

the current paradigm. Because elaborative inference/explanations are effective for

constructive processing, I would expect these types of revisions to further enhance

subsequent integration performance.

A novel takeaway from note revision type centers on the fact that when

interpolated, more lecture content was added to note revisions. Revision type is a new

measure, and as such, has only Luo et al.’s (2016) study as a comparison. Why did

interpolation elicit these different types of revisions? According to Jing et al. (2016),

interpolation reduced proactive interference, which therefore allowed learners to

consolidate information during segments. In the current experiment, it is possible that

participants added more lecture content during interpolated revision periods because

unrecorded information had not yet been overwritten by incoming content (similar to the

retrieval mechanisms that theoretically underlie interpolated testing). In contrast, an

uninterrupted, 12-minute, post-lecture revision session may have reflected detriments

from retroactive interference, which left participants with little memory of what to add to

their notes when given time to do so. This finding parallels results


from Di Vesta and Smith (1979), where group discussions (a hallmark of interactive

engagement) were only beneficial when interspersed throughout the lecture.

Although not significant after Bonferroni corrections (p = .04), the interpolated

note revision condition did add more visual depictions to the notes. The six 2-minute

revision sessions may have been a timeframe long enough for participants to retrieve

recently-learned (but previously unrecorded) lecture data, but short enough that revisions

that would have otherwise been verbose reiterations were converted to time-conscious

visual representations. This implication is addressed again in the activity section of the

discussion. Regardless, this interpretation should be treated with caution since the effect was

nullified after applying post-hoc corrections.

The effects of interpolation did not transfer to the criterion tests, disconfirming

sub-hypotheses 1.5-1.9. One reason for this result may stem from experimental choices in

delay. In the interpolated testing literature, and in Di Vesta and Smith’s (1979) and Luo

et al.’s (2016) paradigms, “delay” was qualified with a 5-minute distractor task. Since I

was interested in whether these effects were robust to decay, I tested participants’

memories after 24 hours had passed. While it is possible that interpolation never had any

desirable effect on learners’ memories aside from boosting note quantity and quality, an

alternative (and perhaps more plausible) explanation is that interpolation may only yield

benefits within a shorter delay interval. Across seven distinct experiments, Wissman and

Rawson (2015) found consistent, large effects for interpolation when assessments were

conducted immediately following learning, but not after a 20-minute delay. The authors


concluded that these assessments unveiled an “initially sizable but subsequently fragile

effect… for which an explanation remains elusive” (p. 452).

In a similar vein, delay may not be the only influence on interpolation. Recently,

two different studies investigating interpolation came to different conclusions. Healy,

Jones, Lalchandani, and Tack (2017) demonstrated that participants retained substantially

more information when they learned a text in an interpolated fashion, even when the final

test interval was expanded to one week. Conversely, Uner and Roediger III (2017)

concluded that interpolation did not matter; rather, final test scores were enhanced when

participants were tested at all on day 1, resulting in a tied benefit for interpolated and

post-lecture retrieval compared to restudying.

All three of the aforementioned studies incorporated different activities during the

learning trials as well as variations in assessment methods, which suggests that benefits

from interpolation may be measurable on criterion tests, but that this depends on the

engagement method to which participants are assigned. This question was assessed with

my novel manipulation, note revision for others, which I turn to next.

Self-Testing

The second key component of the dissertation served to examine self-testing

along the ICAP infrastructure. Testing, interpolated or otherwise, is postulated to encode

and strengthen memory traces for retrieved information, thereby reducing decay

(Roediger III, Gallo, & Geraci, 2002). Similarly, testing frequently throughout a lecture

induces test-potentiated learning, or an advantage for not only gaining but additionally

retaining more information as a function of multiple retrieval attempts (Arnold &


McDermott, 2013; Chan, McDermott, & Roediger III, 2006). When testing is facilitated

with meaningful generations (in contrast to rote retrieval), learners’ mental constructs and

schemas are not only more deeply entrenched but also gain interconnectivity (Agarwal, 2011;

Blunt & Karpicke, 2014; Carpenter et al., 2006). These conclusions suggest that retrieval

should fall under the active or constructive ICAP encoding mechanisms. Therefore, I

expected to replicate the interpolated testing effect for free recall. However, after a 24-

hour delay, there were no observable benefits for the self-test groups, which disconfirmed

hypothesis 4.

Hypothesis 5 forecasted a testing effect. Specifically, I predicted that retrieval’s

encoding and “insulation” effects (Szpunar et al., 2008), particularly when interpolated

(Jing et al., 2016), would protect learning from proactive interference relative to restudy

and note revision, resulting in higher recall distribution from the middle and end of the

lecture. Instead, all groups’ recalls were largely allocated to the beginning of the lecture,

with significantly less information retained for the middle and especially the end, in

congruence with proactive interference theory (Postman & Underwood, 1973; Wixted,

2004). There was no benefit for testing, interpolated or otherwise. In sum, hypothesis 5

was not supported.

There are several potential explanations for this result. The first is that rather than

creating constructive recalls, participants who self-tested utilized a passive mode of

engagement where facts were recalled, but not manipulated. Chi and Wylie (2014)

illustrated that because the nature of retrieval instructions mandates minimal processing

(i.e., “recall as much information as you can remember”), participants may not extend


effort beyond what is required for the task. Similarly, Chan (2009) found that explicit

instructions to integrate (connect) information during retrieval significantly improved

retrieval. Since initial retrieval performance is also a significant predictor of final test

performance (Van Gog & Sweller, 2015), instructions to recall may, under some

circumstances, only require passive engagement rather than active or constructive.

In line with instructional importance, test expectancy may play a large role in

retrieval outcomes. While the expectancy of an upcoming test motivated students to

attend to lecture (Fitch, Drucker, & Norton Jr, 1951), students who were not informed of

an upcoming test did not benefit from testing nearly as much as those who were informed

(Szpunar et al., 2007; Weinstein et al., 2014). Thus, it appears that both the paradigm’s

constructions and learners’ expectations may act as moderators on whether retrieval is

useful.

Second, retrieval’s boundary conditions may have been breached in the current

experiment. While retrieval is consistently effective with retention for “simple” materials

(i.e., paired associates and short texts), the advantages wane as material complexity and

elemental interconnectedness increase (Van Gog & Sweller, 2015). Sweller (2010)

demonstrated that due to low conceptual relatedness, simple materials necessitate isolated,

passive engagement. Authentic educational materials, such as science texts or lectures,

frequently maximize working memory use for many students (Aiken et al., 1975; Bui &

Myerson, 2014; Piolat et al., 2005). This is postulated to counteract retrieval’s benefits

due to an overabundance of, yet lack of coherence between, cues in working memory,

metacognitive naivety, processing of multiple interacting concepts simultaneously,


negligible background knowledge, and necessity to differentiate between these

interconnected concepts (Wooldridge et al., 2014).

ICAP proposes that although engagement modes are assumed based on overt

behaviors (i.e., restudying notes), whether learners are actually employing such processes

must be inferred with suitable assessments. Subsequently, three possible explanations for

the absence of a testing effect arise: 1) those who restudied or revised their notes may

have covertly also engaged in retrieval or another active mode; 2) note revision may have

acted as a form of retrieval, and/or 3) self-testing did not elicit its supposed

active/constructive knowledge-change processes. The fact that a majority of interpolated

note revisers’ modifications comprised previously absent lecture information suggests

that in line with others’ theses (Luo et al., 2016; Williams & Eggert, 2002), note revision

may have indeed summoned retrieval. Finally, retrieval is most effective when feedback

is given immediately following recall (Butler & Roediger, 2008). In the current study, I

did not provide explicit feedback after retrieval trials on day 1. Arguments for potential

bifurcation effects (i.e., that memory traces are strengthened only for recalled items when

feedback is not provided) (Kornell, Bjork, & Garcia, 2011) could theoretically apply to

those who tested after the lecture; however, participants in the interpolated self-testing

condition were re-exposed to their notes after each retrieval trial and still did not recall

more information at final test than those who recalled after the lecture.

Note Revision for Others

Turning to the novel manipulation, I wanted to combine teaching expectancy and

note revision effects to create a simple yet comparable alternative to self-testing while


also assessing integration. I expected that the nature of revising notes for others’

learning would make salient points of incoherence and/or importance, therefore covertly

encouraging learners to attend to the content’s interrelatedness (an active form of

engagement). To briefly reiterate, individual note revision was not used because A) the

only study to compare outcomes with individual note revision found that it was most

effective when combined with a partner (Luo et al., 2016) and B) students’ perceptions of

“note revision” result in ineffective note recopying during revision periods (Aharony,

2006; Jairam & Kiewra, 2010). Similar to claims from Tran et al. (2015) and Agarwal

(2011), and in keeping with transfer-appropriate processing theory (Rowland, 2014;

Winstanley, 1996), note revision was expected to result in equal to inferior free recall

performance compared to self-testing, but better performance on relational and

integrative responses.

There was a marginal advantage for self-testing (p = .09) when answering target

cued recall questions, therefore disconfirming hypothesis 6.1. This result was not

expected, but is unsurprising given the basic retrieval nature of the task and the groups’

resulting free recall performances. Asking participants to elaborate on a topic (e.g., "left

hemisphere") essentially evoked rote reiteration. Also unexpected was the finding that

note revisers performed similarly to the self-test and restudy conditions on same-segment

elaborations, disconfirming hypothesis 6.2. This finding could be explained by the nature

of the instructions and lack of re-exposure to lecture cues at final test. In Jing et al.’s

(2016) study, participants were shown PowerPoint slides from the lecture and asked to

elaborate on how each slide related to the rest of the lecture. Because the slides


likely contained a plethora of additional cues (which were not specified in the article), the

interpolated testers made significantly more same-segment relational statements than the

interpolated restudy group. The fact that my prompt did not provide additional materials

aside from instructions to elaborate upon the topic statement (e.g., "left hemisphere")

suggests that without explicit cues to guide topic elaboration, learners may not have

considered including temporally “local” information relative to the target concept.

Of critical importance was the finding that note-revisers made significantly more

cross-segmental integration statements than those who restudied, thus partially

confirming hypothesis 7. Although note revisers made more integrative statements than

self-testers, this difference was not significant. This result suggests that, relative to

restudy and in part to self-testing, note revision for others may help learners to grasp the

conceptual interconnectivity in the lecture. In turn, a focal point to consider is whether

the type of note revisions added affects learning.

Whereas the addition of lecture-based revisions may have influenced note-

revisers’ free and cued recalls, the addition of visual diagrams, especially in the

interpolated condition, may have acted as an active or constructive engagement method,

which was then observable with integration scores. The literature on note revisions is

scant; however, research on the role of generative processing (Hora, 2015; Menekse et

al., 2013), visual diagramming (Chiou, 2008), and editing essays for coherence (Cho &

Cho, 2011; Lundstrom & Baker, 2009) suggests that participants may have indirectly

benefitted from creating these revisions for others. Indeed, Schmeck, Mayer, Opfermann,

Pfeiffer, and Leutner (2014) found that learner-generated drawings created during


learning significantly benefitted comprehension relative to learners who viewed an

author's drawings (the generative drawing effect). Clearly, future research

should continue to examine the effects of revision type with respect to conceptual integration.

Metacognition

Absolute accuracy. When students are accurate in judging what and how much

material they remember, they can then choose appropriate study strategies to

subsequently maintain and/or gain knowledge (Son & Metcalfe, 2000). This is achieved

through study-time allocation, which changes depending on whether learners study at

their own leisure or under time constraints (Nelson, Dunlosky, Graf, & Narens, 1994).

Frequently, however, learners retain very little and make JOLs that are far above their

resulting performance (Miller & Geraci, 2011), a phenomenon known as “unskilled and

unaware” (Kruger & Dunning, 1999).

While self-testing did result in better absolute accuracy than restudy and note

revision, these differences were not significant, which disconfirmed hypothesis 8.1. In

addition, I also expected that the note revision conditions’ absolute accuracy would fall

between self-testing and restudy. This was hypothesized because while studying notes

inflates overconfidence, the revisions added were suspected to act as a form of retrieval,

which presumably counteracted some amount of overconfidence. Although the

differences were not significant, the note revision groups showed the greatest absolute

deviation between judgments and performance (43%), compared to restudy (42%) and self-testing (37%), disconfirming

hypothesis 8.2. Possibly, the presence of notes with note revisions inflated note revisers’

judgments more than studying the notes in general. The role of notetaking and revision in


metacognition has received very little attention, but Peverly et al. (2003) did find that notetaking

did not predict self-regulatory behaviors.
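As an illustration of the calibration measures used here, absolute accuracy and overconfidence can be computed as follows. This is a minimal sketch, not the dissertation's analysis code: the JOL and recall values are hypothetical, and I assume both are expressed on a 0-100 scale, with absolute accuracy taken as the mean absolute difference between judgment and performance (lower = better calibrated) and overconfidence as the mean signed difference.

```python
def absolute_accuracy(jols, scores):
    """Mean |JOL - performance|, both on a 0-100 scale; lower = better calibrated."""
    return sum(abs(j - s) for j, s in zip(jols, scores)) / len(jols)

def overconfidence(jols, scores):
    """Mean (JOL - performance); positive values indicate overconfidence."""
    return sum(j - s for j, s in zip(jols, scores)) / len(jols)

# Hypothetical participants: e.g., the first predicted 70% but recalled 40%.
jols = [70, 45, 80]
scores = [40, 50, 45]
print(absolute_accuracy(jols, scores))  # ~23.33 (mean unsigned deviation)
print(overconfidence(jols, scores))     # 20.0 (net overconfidence)
```

Note that the two measures can diverge: an underconfident judgment (45 vs. 50 above) worsens absolute accuracy but partially offsets the signed overconfidence score.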

JOLs. Although there are several remedies available to decrease overconfidence,

self-testing (interpolated or post-lecture) typically results in less overconfidence

compared to restudy (Karpicke, 2009; Szpunar et al., 2014). I replicated an advantage for

self-testing, but only in comparison to note revision. There was no difference between note revision and restudy, nor between restudy and self-testing, and this pattern persisted after the 24-hour delay. These results partially confirmed hypotheses 9.1 and 9.2 in that self-testing did reduce overconfidence both immediately and at a delay, but partially disconfirmed them in that the differences did not hold relative to the restudy groups.

Why, then, did Szpunar et al. (2014) demonstrate an effect of interpolated testing on

overconfidence? New research by Yang, Potts, and Shanks (2017) showed that

interpolated testing increased metamemory monitoring. When tested throughout learning,

participants spent more time studying lists compared to those who engaged in a different

task, which produced better final paired-associates test performance. Szpunar et al.

(2014) found similar results when cued-recall questions were used throughout an

interpolated lecture. However, in both of these paradigms, the same questions

implemented during learning trials were reused at the criterion test. It may be that self-testing during longer, complex, highly interconnected lectures not only decreases retrieval efficacy for learning, but also stifles metacognitive awareness. In

subsequent experiments, a worthwhile endeavor would be to monitor the amount of time


learners chose to spend during learning trials, as this may lead to more insight regarding

self-regulated study and retrieval.


Summary of Hypotheses and Outcomes

Table 5. The dissertation's hypotheses and resulting outcomes.

1. Interpolation will result in better performance on all of the DVs.
   1.1 Note quantity: Supported
   1.2 Note temporal distribution: Supported
   1.3 Note revision quantity: Supported
   1.4 Note revision type
       Elaborative: Not supported
       Proof-reading: Not supported
       Lecture-based: Supported
       Visual: Not supported
   1.5 Free recall quantity: Not supported
   1.6 Free recall temporal distribution: Not supported
   1.7 Cued recall target performance: Not supported
   1.8 Same-segment elaborations: Not supported
   1.9 Integration: Not supported
2.1 Interpolated note revision will take more notes than self-test and restudy: Not supported
2.2 Interpolated self-test will have more notes than restudy (replication of Jing et al., 2016): Not supported
3. Interpolated note revision will take more notes than self-test and restudy in the middle and end portions of the lecture (temporal distribution): Not supported
4. Self-test groups will perform better than note revision and restudy groups on free recall: Not supported
5. Self-test groups will recall more information from the middle and end of the lecture than note revision and restudy groups (temporal distribution): Not supported
6.1 Note revision groups will perform best on cued recall target: Not supported
6.2 Note revision groups will make more same-segment elaborations: Not supported
7. Note revision groups will perform best on integration: Partially supported
8.1 Self-testers will have the best absolute accuracy: Not supported
8.2 Note revision groups' absolute accuracy will fall between self-testing and restudy: Not supported
9.1 Self-testers will be the least overconfident immediately after the lecture: Partially supported
9.2 Self-testers will be the least overconfident after a 24-hour delay: Partially supported

Limitations

There were several limitations to the study. First, for note revision for others, I could not

determine whether it was the act of note revision or the presumption that notes would be

provided to a peer that drove any of the results (or lack thereof in some of the criterion

tests). A new experiment in which learners revise for themselves would be a simple next

step to clarify these results. In addition, most of the effect sizes were below .1. Power

analyses indicated that 25-30 participants would be necessary for the current experiment,

so the fact that effect sizes were so low suggests that additional factors may have


influenced the outcomes, such as the positively skewed free recall distribution. Other

possible factors include individual characteristics (motivation, difficulty level, working memory capacity), experimental methods, underreporting of prior knowledge on the self-report measure, and so on. The manipulations may be helpful, but to what extent and under what

parameters is still unknown.
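The power concern above can be made concrete with a quick conversion, sketched here under the assumption (not stated in the text) that the reported effect sizes are eta-squared values from the ANOVAs. Cohen's f, the effect-size metric that standard a priori ANOVA power analyses take as input, is f = sqrt(eta² / (1 − eta²)), so observed effects below .1 eta-squared yield f values far under the f ≈ 0.25 "medium" benchmark that a 25-30 participant power analysis typically presumes, implying much larger samples would be needed to detect them.

```python
import math

def cohens_f(eta_squared):
    """Convert eta-squared to Cohen's f, the effect-size metric used in ANOVA power analysis."""
    return math.sqrt(eta_squared / (1 - eta_squared))

# Hypothetical values: a conventional "medium" effect (.06) versus an
# effect below .01, closer to what most outcomes here showed.
print(round(cohens_f(0.06), 3))   # 0.253
print(round(cohens_f(0.009), 3))  # 0.095
```

Because required sample size grows roughly with 1/f², halving f more than quadruples the participants needed at the same alpha and power.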

Future Directions

When more notes are available for review, learners demonstrate benefits on

criterion tests (Bui, Myerson, & Hale, 2013). A critical next step is to examine the

efficacy of the external storage function of notes; that is, given that interpolated lecturing vastly improved note quantity, temporal distribution, and the types of note revisions added, it is imperative to investigate whether these notes and revisions affect

learning when participants are given a chance to study them after a delay. Since students

allocate a majority of their study time to the night prior to and the morning of an exam

(Taraban, Maki, & Rynearson, 1999), giving participants a chance to study their notes

before the final test (rather than taking notes on day 1 and testing on day 2) would better

represent applied learning strategies and highlight the importance of note characteristics

in assessment. In essence, testing for external storage effects could reveal significant

performance benefits related to interpolation and note-revision manipulations.

Another future possibility relates to combining note revision with self-testing. Alternating between the two during interpolated lectures might yield an additive benefit, where learners are able to retrieve, restudy, and revise their notes. Since

note revision made significant strides on integrative statements in the current experiment,


combining it with self-testing (in a paradigm in which testing effects prevail) may

provide learners the best of both worlds: enhanced retention, integrative processing, and

an enriched set of notes to study later.

Regarding the theoretical basis for this dissertation, ICAP and other depth-of-

processing theories should continue to be explored as both the educational environment

and its pupils continue to evolve online. While retrieval is in some circumstances hailed as an integrative learning tool (Carpenter et al., 2006), in others it provides little beyond

verbatim storage (Bruchok et al., 2016). Examining the degree to which learning

activities match their hypothesized engagement modes, and ensuring that they are

assessed in kind, will help instructors and learners create better-aligned learning environments. For example, in an introductory psychology course, a major focus is concept introduction and clarification, which contrasts with the elaborative and constructive approaches of senior-level seminars. Instructors teaching introductory courses may focus on implementing active engagement and measure it in kind (e.g., cued recall), whereas senior courses may utilize constructive/interactive processes, which could then be measured by advanced assessments (e.g., research proposals, product creation, or

transfer).

Finally, given the current grounding of the present study in the available

literature, it is curious why few of the predictions were met. One strong possibility is that

prior research has involved simple materials, like paired-associates, and not the more

demanding, and indeed, more typical, academic materials employed in this dissertation.

Another likely cause is that much of the prior research tested individuals on the same day


as study. Although prior research did implement distractor tasks within same-day testing,

the tests may have nonetheless been more representative of working rather than long-term

memory. Finally, all participants engaged in their prescribed activity

for the same amount of time. Constraints from time-on-task effects may have contributed

to difficulties in detecting differences in the manipulations (Goldhammer et al., 2014).

Overall, it is imperative to replicate the current design using same-day (5-minute delay) criterion tests, which could inform the issue of the fragility (transience) of the encoding benefits of the tested manipulations.


CHAPTER VI

CONCLUSION

Participants took significantly more notes during interpolated versus continuous lecture learning, regardless of the specific learning activity. Interpolated versus continuous learning also resulted in more notes for the middle and final portions of the lecture (no

differences for beginning). Together these results indicate that interpolated learning

better prepares students for more effective study and more successful test taking in

typical learning settings. Because note revision was considered an active (or potentially

constructive) learning activity within the ICAP framework, it was of interest to know

whether note revision would be amplified within interpolated versus continuous learning. Indeed, interpolated versus continuous learners made significantly more note revisions

(specifically, lecture-based and visual). Taken together, these two results represent the

major findings of this dissertation: interpolated lecture learning results in significantly

more lecture notes that are more evenly representative of the lecture content, and when

combined with a note revision activity, results in additional lecture content and visual

representations.

A major interest in this dissertation concerned the performance benefits of

revising notes for others. Participants who revised their notes were able to make

significantly more cross-lecture integration statements than the self-test or restudy

conditions, suggesting that note revision for others may allow learners to grasp how

temporally distant concepts are related to one another. A practical step for instructors may be to interpolate lectures with note revision periods.


There were no significant encoding benefits in the free-recall data, and only a marginal advantage of the self-test over the note revision activity in the cued-recall data after a one-day delay. Encoding effects refer to enhanced memory for lecture material due to notetaking, and they may have appeared at a shorter delay.

notetaking and revising notes for others may have appeared had participants been tested

after studying their notes at a delay. Future work should investigate the notes’ external

storage function as a product of interpolated lectures and note revision for others.

Finally, prior research, which had tested participants on the same day as study,

may be more representative of working-memory benefits and not the long-term memory

benefits tested in this dissertation. It would be important to test this possibility with the

present experimental manipulations in order to better understand the immediate and

longer-term effects of the active, elaborative, and integrative learning processes.


REFERENCES

Agarwal, P. K. (2011). Examining the relationship between fact learning and higher order

learning via retrieval practice. Retrieved from

http://openscholarship.wustl.edu/etd/546

Agarwal, P. K., Karpicke, J. D., Kang, S. H., Roediger, H. L., & McDermott, K. B.

(2008). Examining the testing effect with open- and closed-book tests. Applied

Cognitive Psychology, 22(7), 861-876.

Aharony, N. (2006). The use of deep and surface learning strategies among students

learning English as a foreign language in an Internet environment. British Journal

of Educational Psychology, 76(4), 851-866.

Aiken, E. G., Thomas, G. S., & Shennum, W. A. (1975). Memory for a lecture: Effects of

notes, lecture rate, and informational density. Journal of Educational Psychology,

67(3), 439-444.

Allen, G. A., Mahler, W. A., & Estes, W. (1969). Effects of recall tests on long-term

retention of paired associates. Journal of Verbal Learning and Verbal Behavior,

8(4), 463-470.

Aly, I. (2013). Performance in an online introductory course in a hybrid classroom

setting. The Canadian Journal of Higher Education, 43(2), 85-99.

Ameen, E. C., Guffey, D. M., & Jackson, C. (2002). Evidence of teaching anxiety among

accounting educators. Journal of Education for Business, 78(1), 16-22.

Anderson, L. W., Krathwohl, D. R., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P.,

. . . Wittrock, M. (2001). A taxonomy for learning, teaching and assessing: A

revision of Bloom’s taxonomy (Vol. 9). New York, NY: Longman.

Anderson, R. C. (1972). How to construct achievement tests to assess comprehension.

Review of Educational Research, 42(2), 145-170.

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for

college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Armbruster, B. B. (2000). Taking notes from lectures. In R. F. Flippo & D. C. Caverly

(Eds.), Handbook of college reading and study strategy research (pp. 175-199).

Mahwah, NJ: Lawrence Erlbaum.

Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: Distinguishing

between direct and indirect effects of tests. Journal of Experimental Psychology:

Learning, Memory, and Cognition, 39(3), 940-945.


Auld Jr, F., & White, A. M. (1956). Rules for dividing interviews into sentences. The

Journal of Psychology, 42(2), 273-281.

Bachhel, R., & Thaman, R. G. (2014). Effective use of pause procedure to enhance

student engagement and learning. Journal of Clinical and Diagnostic Research,

8(8), 1-3.

Bargh, J. A., & Schul, Y. (1980). On the cognitive benefits of teaching. Journal of

Educational Psychology, 72(5), 593-604.

Barron, B. (2003). When smart groups fail. The Journal of the Learning Sciences, 12(3),

307-359.

Bartlett, F. (1958). Thinking: An experimental and social study. New York, NY: Basic

Books.

Bassili, J. N., & Joordens, S. (2008). Media player tool use, satisfaction with online

lectures and examination performance. International Journal of E-Learning &

Distance Education, 22(2), 93-108.

Bauer, A., & Koedinger, K. R. (2007). Selection-based note-taking applications. In

Proceedings of the SIGCHI Conference on Human Factors in Computing

Systems.

Beck, K. (2015). Note taking effectiveness in the modern classroom. The Compass, 1(1),

1-9.

Bellezza, F. S., Cheesman, F. L., & Reddy, B. G. (1977). Organization and semantic

elaboration in free recall. Journal of Experimental Psychology: Human Learning

and Memory, 3(5), 539-550.

Benton, S. L., Kiewra, K. A., Whitfill, J. M., & Dennison, R. (1993). Encoding and

external-storage effects on writing processes. Journal of Educational Psychology,

85(2), 267-280.

Benware, C. A., & Deci, E. L. (1984). Quality of learning with an active versus passive

motivational set. American Educational Research Journal, 21(4), 755-765.

Berninger, V. W., Vaughan, K. B., Abbott, R. D., Abbott, S. P., Rogan, L. W., Brooks,

A., . . . Graham, S. (1997). Treatment of handwriting problems in beginning

writers: Transfer from handwriting to composition. Journal of Educational

Psychology, 89(4), 652-666.

Bertou, P. D., Clasen, R. E., & Lambert, P. (1972). An analysis of the relative efficacy of

advanced organizers, post organizers, interspersed questions, and combinations


thereof in facilitating learning and retention from a televised lecture. The Journal

of Educational Research, 65(7), 329-333.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way:

Creating desirable difficulties to enhance learning. In M. A. Gernsbacher (Ed.),

Psychology and the real world: Essays illustrating fundamental contributions to

society (pp. 56-64). New York, NY: Worth Publishers.

Blunt, J. R., & Karpicke, J. D. (2014). Learning with retrieval-based concept mapping.

Journal of Educational Psychology, 106(3), 849-858. doi:10.1037/a0035934

Bonner, J. M., & Holliday, W. G. (2006). How college science students engage in note-taking strategies. Journal of Research in Science Teaching, 43(8), 786-818.

Brazeau, G. A. (2006). Handouts in the classroom: Is note taking a lost skill? American

Journal of Pharmaceutical Education, 70(2), 1-2.

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T.

(2013). Studying learning in the worldwide classroom: Research into edX's first

MOOC. Research & Practice in Assessment, 8, 13-25.

Bretzing, B. H., & Kulhavy, R. W. (1979). Notetaking and depth of processing.

Contemporary Educational Psychology, 4(2), 145-153.

Bruchok, C., Mar, C., & Craig, S. D. (2016). Is free recall active: The testing effect

through the ICAP lens. Paper presented at the E-Learn: World Conference on E-

Learning in Corporate, Government, Healthcare, and Higher Education,

Chesapeake, VA.

Bruffee, K. A. (1999). Collaborative learning: Higher education, independence, and the

authority of knowledge (2nd ed.). Baltimore, MD: Johns Hopkins University

Press.

Bui, D. C., & Myerson, J. (2014). The role of working memory abilities in lecture note-

taking. Learning and Individual Differences, 33, 12-22.

Bui, D. C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring

alternative strategies for improved recall. Journal of Educational Psychology,

105(2), 299-309.

Bunce, D. M., Flens, E. A., & Neiles, K. Y. (2010). How long can students pay attention

in class? A study of student attention decline using clickers. Journal of Chemical

Education, 87(12), 1438-1443.


Butler, A., Phillmann, K.-B., & Smart, L. (2001). Active learning within a lecture:

Assessing the impact of short, in-class writing exercises. Teaching of Psychology,

28(4), 257-259.

Butler, A. C., Karpicke, J. D., & Roediger III, H. L. (2008). Correcting a metacognitive

error: Feedback increases retention of low-confidence correct responses. Journal

of Experimental Psychology: Learning, Memory, and Cognition, 34(4), 918.

Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and

reduces the negative effects of multiple-choice testing. Memory & Cognition,

36(3), 604-616.

Butler, A. C., & Roediger III, H. L. (2007). Testing improves long-term retention in a

simulated classroom setting. European Journal of Cognitive Psychology, 19(4-5),

514-527.

Carpenter, S. K. (2009). Cue strength as a moderator of the testing effect: The benefits of

elaborative retrieval. Journal of Experimental Psychology: Learning, Memory,

and Cognition, 35(6), 1563-1569.

Carpenter, S. K. (2011). Semantic information activated during retrieval contributes to

later retention: Support for the mediator effectiveness hypothesis of the testing

effect. Journal of Experimental Psychology: Learning, Memory, and Cognition,

37(6), 1-6.

Carpenter, S. K. (2012). Testing enhances the transfer of learning. Current Directions in

Psychological Science, 21(5), 279-283.

Carpenter, S. K., & DeLosh, E. L. (2006). Impoverished cue support enhances

subsequent retention: Support for the elaborative retrieval explanation of the

testing effect. Memory & Cognition, 34(2), 268-276.

Carpenter, S. K., & Kelly, J. W. (2012). Tests enhance retention and transfer of spatial

learning. Psychonomic Bulletin & Review, 19(3), 443-448.

Carpenter, S. K., & Pashler, H. (2007). Testing beyond words: Using tests to enhance

visuospatial map learning. Psychonomic Bulletin & Review, 14(3), 474-478.

Carpenter, S. K., Pashler, H., & Vul, E. (2006). What types of learning are enhanced by a

cued recall test? Psychonomic Bulletin & Review, 13(5), 826-830.

Carpenter, S. K., Pashler, H., Wixted, J. T., & Vul, E. (2008). The effects of tests on

learning and forgetting. Memory & Cognition, 36(2), 438-448.

Carter, J. F., & Van Matre, N. H. (1975). Note taking versus note having. Journal of

Educational Psychology, 67(6), 900-904.


Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed

practice in verbal recall tasks: A review and quantitative synthesis. Psychological

Bulletin, 132(3), 354-380.

Chan, J. C. (2009). When does retrieval induce forgetting and when does it induce

facilitation? Implications for retrieval inhibition, testing effect, and text

processing. Journal of Memory and Language, 61(2), 153-170.

Chan, J. C., McDermott, K. B., & Roediger III, H. L. (2006). Retrieval-induced

facilitation: Initially nontested material can benefit from prior testing of related

material. Journal of Experimental Psychology: General, 135(4), 553-571.

Chi, M. T. (2009). Active‐constructive‐interactive: A conceptual framework for

differentiating learning activities. Topics in Cognitive Science, 1(1), 73-105.

doi:10.1111/j.1756-8765.2008.01005.x

Chi, M. T., & Menekse, M. (2015). Dialogue patterns in peer collaboration that promote

learning. In L. Resnick, C. Asterhan, & S. Clarke (Eds.), Socializing intelligence

through academic talk and dialogue (pp. 263-274). Washington, DC: American

Educational Research Association.

Chi, M. T., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to

active learning outcomes. Educational Psychologist, 49(4), 219-243.

doi:10.1080/00461520.2014.965823

Chiou, C. C. (2008). The effect of concept mapping on students’ learning achievements

and interests. Innovations in Education and Teaching International, 45(4), 375-

387.

Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional

Science, 39(5), 629-643. doi:10.1007/s11251-010-9146-1

Clark, R. C., & Mayer, R. E. (2010). Applying the segmenting and pretraining principles:

Managing complexity by breaking a lesson into parts. In e-Learning and the

Science of Instruction: Proven Guidelines for Consumers and Designers of

Multimedia Learning (3rd ed., pp. 204-220). Hoboken, US: Pfeiffer.

Coats, H. (2016). A study on the effect of lecture length in the flipped classroom

(Doctoral dissertation). Texas Tech University.

Comer, D. K., Clark, C. R., & Canelas, D. A. (2014). Writing to learn and learning to

write across the disciplines: Peer-to-peer writing in introductory-level MOOCs.

The International Review of Research in Open and Distributed Learning, 15(5),

1-57.


Conati, C., & Carenini, G. (2001). Generating tailored examples to support learning via

self-explanation. Paper presented at the International Joint Conference on

Artificial Intelligence.

Congleton, A., & Rajaram, S. (2012). The origin of the interaction between learning

method and delay in the testing effect: The roles of processing and conceptual

retrieval organization. Memory & Cognition, 40(4), 528-539.

Connelly, V., Dockrell, J. E., & Barnett, J. (2005). The slow handwriting of

undergraduate students constrains overall performance in exam essays.

Educational Psychology, 25(1), 99-107.

Copley, J. (2007). Audio and video podcasts of lectures for campus‐based students:

Production and evaluation of student use. Innovations in Education and Teaching

International, 44(4), 387-399.

Coppens, L. C., Verkoeijen, P. P., & Rikers, R. M. (2011). Learning Adinkra symbols:

The effect of testing. Journal of Cognitive Psychology, 23(3), 351-357.

Corey, S. M. (1934). Learning from lectures vs. learning from readings. Journal of

Educational Psychology, 25(6), 459-470.

Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: A framework for memory

research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684.

Craik, F. I., & Tulving, E. (1975). Depth of processing and the retention of words in

episodic memory. Journal of Experimental Psychology: General, 104(3), 268-

294.

Cranney, J., Ahn, M., McKinnon, R., Morris, S., & Watts, K. (2009). The testing effect,

collaborative learning, and retrieval-induced facilitation in a classroom setting.

European Journal of Cognitive Psychology, 21(6), 919-940.

Cull, W. L. (2000). Untangling the benefits of multiple study opportunities and repeated

testing for cued recall. Applied Cognitive Psychology, 14(3), 215-235.

Czerniak, C. M., & Haney, J. J. (1998). The effect of collaborative concept mapping on

elementary preservice teachers' anxiety, efficacy, and achievement in physical

science. Journal of Science Teacher Education, 9(4), 303-320.

Dancy, R. M. (1991). Two studies in the early Academy. Albany, NY: SUNY Press.

Daou, M., Buchanan, T. L., Lindsey, K. R., Lohse, K. R., & Miller, M. W. (2016).

Expecting to teach enhances learning: Evidence from a motor learning paradigm.

Journal of Motor Learning and Development, 4(2), 197-207.


Daou, M., Lohse, K. R., & Miller, M. W. (2016). Expecting to teach enhances motor learning and information processing during practice. Human Movement Science, 49, 336-345.

DeLozier, S. J., & Rhodes, M. G. (2017). Flipped classrooms: A review of key ideas and recommendations for practice. Educational Psychology Review, 29(1), 141-151.

Dempster, F. N. (1996). Distributing and managing the conditions of encoding and practice. Memory, 10, 317-344.

Di Vesta, F. J., & Gray, G. S. (1972). Listening and note taking. Journal of Educational Psychology, 63(1), 8-14.

Di Vesta, F. J., & Gray, G. S. (1973). Listening and note taking: II. Immediate and delayed recall as functions of variations in thematic continuity, note taking, and length of listening-review intervals. Journal of Educational Psychology, 64(3), 278-287.

Di Vesta, F. J., & Smith, D. A. (1979). The pausing principle: Increasing the efficiency of memory for ongoing events. Contemporary Educational Psychology, 4(3), 288-296.

Dornisch, M., Sperling, R. A., & Zeruth, J. A. (2011). The effects of levels of elaboration on learners’ strategic processing of text. Instructional Science, 39(1), 1-26.

Doymus, K. (2008). Teaching chemical equilibrium with the jigsaw technique. Research in Science Education, 38(2), 249-260. doi:10.1007/s11165-007-9047-8

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Eddy, S. L., Brownell, S. E., Thummaphan, P., Lan, M.-C., & Wenderoth, M. P. (2015). Caution, student experience may vary: Social identities impact a student’s experience in peer discussions. CBE-Life Sciences Education, 14(4), 1-17. doi:10.1187/cbe.15-05-0108

Eglington, L. G., & Kang, S. H. (2016). Retrieval practice benefits deductive inference. Educational Psychology Review, 1-14. doi:10.1007/s10648-016-9386-y

Ehly, S., Keith, T. Z., & Bratton, B. (1987). The benefits of tutoring: An exploration of expectancy and outcomes. Contemporary Educational Psychology, 12(2), 131-134.

Entwistle, N., McCune, V., & Hounsell, J. (2002). Approaches to studying and perceptions of university teaching-learning environments: Concepts, measures and preliminary findings. Occasional Report, 1, 1-19.

Evans, D. J., & Cuffe, T. (2009). Near-peer teaching in anatomy: An approach for deeper learning. Anatomical Sciences Education, 2(5), 227-233.

Fink III, J. L. (2010). Why we banned use of laptops and “scribe notes” in our classroom. American Journal of Pharmaceutical Education, 74(6), 114.

Fiorella, L., & Mayer, R. E. (2013). The relative benefits of learning by teaching and teaching expectancy. Contemporary Educational Psychology, 38(4), 281-288.

Fiorella, L., & Mayer, R. E. (2014). Role of expectations and explanations in learning by teaching. Contemporary Educational Psychology, 39(2), 75-85.

Fiorella, L., & Mayer, R. E. (2015a). Eight ways to promote generative learning. Educational Psychology Review, 1-25.

Fiorella, L., & Mayer, R. E. (2015b). Learning as a generative activity: Eight learning strategies that promote understanding. New York, NY: Cambridge University Press.

Fisher, J. L., & Harris, M. B. (1973). Effect of note taking and review on recall. Journal of Educational Psychology, 65(3), 321-325.

Fitch, M. L., Drucker, A. J., & Norton Jr., J. (1951). Frequent testing as a motivating factor in large lecture classes. Journal of Educational Psychology, 42(1), 1-20.

Florax, M., & Ploetzner, R. (2010). What contributes to the split-attention effect? The role of text segmentation, picture labelling, and spatial proximity. Learning and Instruction, 20(3), 216-224.

Foos, P. W., & Fisher, R. P. (1988). Using tests as learning opportunities. Journal of Educational Psychology, 80(2), 179-183.

Forehand, M. (2010). Bloom's taxonomy: Original and revised. Emerging perspectives on learning, teaching, and technology. Retrieved from http://projects.coe.uga.edu/epltt/

Foulin, J. N. (1995). Pauses and fluency: Chronometric indices of writing production. L’Annee Psychologique, 95(3), 483-504.

Gardiner, F. M., Craik, F. I., & Bleasdale, F. A. (1973). Retrieval difficulty and subsequent recall. Memory & Cognition, 1(3), 213-216.

Goldhammer, F., Naumann, J., Stelter, A., Toth, K., Rolke, H., & Klieme, E. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106(3), 608-626.

Goldman-Eisler, F. (1972). Pauses, clauses, sentences. Language and Speech, 15(2), 103-113.

Gregory, A., Walker, I., McLaughlin, K., & Peets, A. D. (2011). Both preparing to teach and teaching positively impact learning outcomes for peer teachers. Medical Teacher, 33(8), 417-422.

Haas, C. (1989). Does the medium make a difference? Two studies of writing with pen and paper and with computers. Human-Computer Interaction, 4(2), 149-169.

Hartley, J. (1976). Lecture handouts and student note-taking. Programmed Learning and Educational Technology, 13(2), 58-64. doi:10.1080/1355800760130208

Hartley, J., & Davies, I. K. (1978). Note-taking: A critical review. Programmed Learning and Educational Technology, 15(3), 207-224.

Haynes, J. M., McCarley, N. G., & Williams, J. L. (2015). An analysis of notes taken during and after a lecture presentation. North American Journal of Psychology, 17(1), 175-186.

Healy, A. F., Jones, M., Lalchandani, L. A., & Tack, L. A. (2017). Timing of quizzes during learning: Effects on motivation and retention. Journal of Experimental Psychology: Applied, 23(2), 128.

Hebb, D. O. (1966). Textbook of psychology. Philadelphia, PA: W. B. Saunders.

Hintzman, D. L., Block, R. A., & Summers, J. J. (1973). Modality tags and memory for repetitions: Locus of the spacing effect. Journal of Verbal Learning and Verbal Behavior, 12(2), 229-238.

Holen, M. C., & Oaster, T. R. (1976). Serial position and isolation effects in a classroom lecture simulation. Journal of Educational Psychology, 68(3), 293-296. doi:10.1037/0022-0663.68.3.293

Hoogerheide, V., Deijkers, L., Loyens, S. M., Heijltjes, A., & van Gog, T. (2016). Gaining from explaining: Learning improves from explaining to fictitious others on video, not from writing to them. Contemporary Educational Psychology, 44, 95-106.

Hora, M. T. (2015). Toward a descriptive science of teaching: How the TDOP illuminates the multidimensional nature of active learning in postsecondary classrooms. Science Education, 99(5), 783-818. doi:10.1002/sce.21175

Howe, M. J. (1970). Introduction to human memory: A psychological approach. New York: Harper & Row.

Jairam, D., & Kiewra, K. A. (2010). Helping students soar to success on computers: An investigation of the SOAR study method for computer-based learning. Journal of Educational Psychology, 102(3), 601-614. doi:10.1037/a0019137

Jing, H. G., Szpunar, K. K., & Schacter, D. L. (2016). Interpolated testing influences focused attention and improves integration of information during a video-recorded lecture. Journal of Experimental Psychology: Applied, 22(3), 305-318.

Johnson, C. I., & Mayer, R. E. (2009). A testing effect with multimedia learning. Journal of Educational Psychology, 101(3), 621-629.

Johnston, J. O., & Calhoun, J. A. P. (1969). The serial position effect in lecture material. The Journal of Educational Research, 62(6), 251-258. doi:10.1080/00220671.1969.10883835

Kane, M. J., & Engle, R. W. (2000). Working-memory capacity, proactive interference, and divided attention: Limits on long-term memory retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(2), 336.

Kang, S. H., McDaniel, M. A., & Pashler, H. (2011). Effects of testing on learning of functions. Psychonomic Bulletin & Review, 18(5), 998-1005.

Kang, S. H., McDermott, K. B., & Roediger III, H. L. (2007). Test format and corrective feedback modify the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19(4-5), 528-558.

Karpicke, J. D. (2009). Metacognitive control and strategy selection: Deciding to practice retrieval during learning. Journal of Experimental Psychology: General, 138(4), 469-486.

Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772-775. doi:10.1126/science.1199327

Karpicke, J. D., & Roediger, H. L. (2007). Repeated retrieval during learning is the key to long-term retention. Journal of Memory and Language, 57(2), 151-162.

Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968.

Katayama, A. D., & Robinson, D. H. (2000). Getting students “partially” involved in note-taking using graphic organizers. The Journal of Experimental Education, 68(2), 119-133.

Kay, R. H., & Lauricella, S. (2014). Investigating the benefits and challenges of using laptop computers in higher education classrooms. Canadian Journal of Learning and Technology, 40(2), 1-25.

Ke, F. (2010). Examining online teaching, cognitive, and social presence for adult students. Computers & Education, 55(2), 808-820.

Kiewra, K. A. (1985). Students' note-taking behaviors and the efficacy of providing the instructor's notes for review. Contemporary Educational Psychology, 10(4), 378-386. doi:10.1016/0361-476X(85)90034-7

Kiewra, K. A. (1989). A review of note-taking: The encoding-storage paradigm and beyond. Educational Psychology Review, 1(2), 147-172. doi:10.1007/BF01326640

Kiewra, K. A., Dubois, N. F., Christian, D., McShane, A., Meyerhoffer, M., & Roskelley, D. (1991). Note-taking functions and techniques. Journal of Educational Psychology, 83(2), 240-245. doi:10.1037/0022-0663.83.2.240

Kiewra, K. A., Mayer, R. E., Christensen, M., Kim, S.-I., & Risch, N. (1991). Effects of repetition on recall and note-taking: Strategies for learning from lectures. Journal of Educational Psychology, 83(1), 120-123.

Kim, K., Turner, S. A., & Pérez-Quiñones, M. A. (2009). Requirements for electronic note taking systems: A field study of note taking in university classrooms. Education and Information Technologies, 14(3), 255-283. doi:10.1007/s10639-009-9086-z

King, A. (1989). Effects of self-questioning training on college students' comprehension of lectures. Contemporary Educational Psychology, 14(4), 366-381.

King, A. (1991). Improving lecture comprehension: Effects of a metacognitive strategy. Applied Cognitive Psychology, 5(4), 331-346.

Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163-182.

Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49(4), 294-303.

Knight, L. J., & McKelvie, S. J. (1986). Effects of attendance, note-taking, and review on memory for a lecture: Encoding vs. external storage functions of notes. Canadian Journal of Behavioural Science, 18(1), 52-61. doi:10.1037/h0079957

Kornell, N., Bjork, R. A., & Garcia, M. A. (2011). Why tests appear to prevent forgetting: A distribution-based bifurcation model. Journal of Memory and Language, 65(2), 85-97.

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Landauer, T. K. (2006). Latent semantic analysis. In Encyclopedia of Cognitive Science. Wiley Online Library.

Lebauer, R. S. (1984). Using lecture transcripts in EAP lecture comprehension courses. Teachers of English to Speakers of Other Languages Quarterly, 18(1), 41-54.

Leeming, F. C. (2002). The exam-a-day procedure improves performance in psychology classes. Teaching of Psychology, 29(3), 210-212.

Lehman, M., Smith, M. A., & Karpicke, J. D. (2014). Toward an episodic context account of retrieval-based learning: Dissociating retrieval practice and elaboration. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), 1787-1794.

Lockhart, R. S., & Craik, F. I. (1990). Levels of processing: A retrospective commentary on a framework for memory research. Canadian Journal of Psychology, 44(1), 87-112.

Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer's own writing. Journal of Second Language Writing, 18(1), 30-43. doi:10.1016/j.jslw.2008.06.002

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: How revision, pauses, and partners affect note taking and achievement. Instructional Science, 44(1), 45-67.

Lusk, D. L., Evans, A. D., Jeffrey, T. R., Palmer, K. R., Wikstrom, C. S., & Doolittle, P. E. (2009). Multimedia learning and individual differences: Mediating the effects of working memory capacity with segmentation. British Journal of Educational Technology, 40(4), 636-651.

Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38(2), 94-97.

Lyons, A., Reysen, S., & Pierce, L. (2012). Video lecture format, student technological efficacy, and social presence in online courses. Computers in Human Behavior, 28(1), 181-186.

Mannes, S. M., & Kintsch, W. (1987). Knowledge organization and text organization. Cognition and Instruction, 4(2), 91-115.

Marsh, E. J., & Sink, H. E. (2010). Access to handouts of presentation slides during lecture: Consequences for learning. Applied Cognitive Psychology, 24(5), 691-706.

Masson, M. E., & McDaniel, M. A. (1981). The role of organizational processes in long-term retention. Journal of Experimental Psychology: Human Learning and Memory, 7(2), 100-110.

Mayer, R. (2005). Cognitive theory of multimedia learning. In R. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 31-48). New York, NY: Cambridge University Press.

Mayer, R. E. (2002). Rote versus meaningful learning. Theory Into Practice, 41(4), 226-232.

Mayer, R. E. (2008). Advances in applying the science of learning and instruction to education. Psychological Science in the Public Interest, 9(3), i-ii.

Mayer, R. E. (2010). Learning with technology. The nature of learning: Using research to inspire practice, 179-198.

Mayer, R. E., & Alexander, P. A. (2011). Handbook of research on learning and instruction. New York, NY: Routledge.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.

Mayer, R. E., & Pilegard, C. (2014). Principles for managing essential processing in multimedia learning: Segmenting, pre-training, and modality principles. In The Cambridge handbook of multimedia learning (pp. 316). Cambridge University Press.

Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., . . . Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51-57.

McAndrew, D. A. (1983). Underlining and notetaking: Some suggestions from research. Journal of Reading, 27(2), 103-108.

McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19(4-5), 494-513.

McDaniel, M. A., Roediger, H. L., & McDermott, K. B. (2007). Generalizing test-enhanced learning from the laboratory to the classroom. Psychonomic Bulletin & Review, 14(2), 200-206.

McEldoon, K. L., Durkin, K. L., & Rittle-Johnson, B. (2013). Is self-explanation worth the time? A comparison to additional practice. British Journal of Educational Psychology, 83(4), 615-632. doi:10.1111/j.2044-8279.2012.02083.x

McKeachie, W. J. (1987). Teaching and learning in the college classroom: A review of the research literature. Ann Arbor, MI: The Regents of the University of Michigan.

Menekse, M., Stump, G. S., Krause, S., & Chi, M. T. (2013). Differentiated overt learning activities for effective instruction in engineering classrooms. Journal of Engineering Education, 102(3), 346-374. doi:10.1002/jee.20021

Metcalfe, J. (2011). Desirable difficulties and studying in the Region of Proximal Learning. In A. S. Benjamin (Ed.), Successful remembering and successful forgetting: A Festschrift in honor of Robert A. Bjork (pp. 259-276). Hove, UK: Psychology Press.

Meyer, B., Schermuly, C. C., & Kauffeld, S. (2016). That’s not my place: The interacting effects of faultlines, subgroup size, and social competence on social loafing behaviour in work groups. European Journal of Work and Organizational Psychology, 25(1), 31-49. doi:10.1080/1359432X.2014.996554

Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(2), 502-506. doi:10.1037/a0021802

Mintzes, J. J., Canas, A., Coffey, J., Gorman, J., Gurley, L., Hoffman, R., . . . Trifone, J. (2011). Comment on “Retrieval practice produces more learning than elaborative studying with concept mapping”. Science, 334(6055), 453-453.

Monroe, P. (1916). Universities. In A syllabus of the course of study in the history of education (pp. 87). United States: Macmillan Company.

Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16(5), 519-533.

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1-10.

Narloch, R., Garbin, C. P., & Turnage, K. D. (2006). Benefits of prelecture quizzes. Teaching of Psychology, 33(2), 109-112.

Nelson, T. O., & Dunlosky, J. (1991). When people's judgments of learning (JOLs) are extremely accurate at predicting subsequent recall: The “delayed-JOL effect”. Psychological Science, 2(4), 267-271.

Nelson, T. O., Dunlosky, J., Graf, A., & Narens, L. (1994). Utilization of metacognitive judgments in the allocation of study during multitrial learning. Psychological Science, 5(4), 207-213.

Nestojko, J. F., Bui, D. C., Kornell, N., & Bjork, E. L. (2014). Expecting to teach enhances learning and organization of knowledge in free recall of text passages. Memory & Cognition, 42(7), 1038-1048.

Nunes, L. D., & Weinstein, Y. (2012). Testing improves true recall and protects against the build-up of proactive interference without increasing false recall. Memory, 20(2), 138-154.

O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85-95.

Oliver, C. E. (1990). A sensorimotor program for improving writing readiness skills in elementary-age children. American Journal of Occupational Therapy, 44(2), 111-116.

Olsen, L. A., & Huckin, T. H. (1990). Point-driven understanding in engineering lecture comprehension. English for Specific Purposes, 9(1), 33-47.

Osborne, J. (2013). The 21st century challenge for science education: Assessing scientific reasoning. Thinking Skills and Creativity, 10, 265-279. doi:10.1016/j.tsc.2013.07.006

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.

Padilla-Walker, L. M. (2006). The impact of daily extra credit quizzes on exam performance. Teaching of Psychology, 33(4), 236-239.

Palmere, M., Benton, S. L., Glover, J. A., & Ronning, R. R. (1983). Elaboration and recall of main ideas in prose. Journal of Educational Psychology, 75(6), 898-907.

Pan, S. C., Gopal, A., & Rickard, T. C. (2016). Testing with feedback yields potent, but piecewise, learning of history and biology facts. Journal of Educational Psychology, 108(4), 1-13.

Pastötter, B., Schicker, S., Niedernhuber, J., & Bäuml, K.-H. T. (2011). Retrieval during learning facilitates subsequent memory encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(2), 287-297.

Perrig, W., & Kintsch, W. (1985). Propositional and situational representations of text. Journal of Memory and Language, 24(5), 503-518.

Peterson, D. J., & Mulligan, N. W. (2012). A negative effect of repetition in episodic memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(6), 1786-1791.

Peterson, D. J., & Mulligan, N. W. (2013). The negative testing effect and multifactor account. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(4), 1287-1293.

Peverly, S. T. (2006). The importance of handwriting speed in adult writing. Developmental Neuropsychology, 29(1), 197-216.

Peverly, S. T., Brobst, K. E., Graham, M., & Shaw, R. (2003a). College adults are not good at self-regulation: A study on the relationship of self-regulation, note taking, and test taking. Journal of Educational Psychology, 95(2), 335-346.

Peverly, S. T., Ramaswamy, V., Brown, C., Sumowski, J., Alidoost, M., & Garner, J. (2007). What predicts skill in lecture note taking? Journal of Educational Psychology, 99(1), 167-180.

Peverly, S. T., Vekaria, P. C., Reddington, L. A., Sumowski, J. F., Johnson, K. R., & Ramsay, C. M. (2013). The relationship of handwriting speed, working memory, language comprehension and outlines to lecture note-taking and test-taking among college students. Applied Cognitive Psychology, 27(1), 115-126.

Pilegard, C., & Mayer, R. E. (2015). Adding judgments of understanding to the metacognitive toolbox. Learning and Individual Differences, 41, 62-72.

Piolat, A., Olive, T., & Kellogg, R. T. (2005). Cognitive effort during note taking. Applied Cognitive Psychology, 19(3), 291-312.

Postman, L., & Underwood, B. J. (1973). Critical issues in interference theory. Memory & Cognition, 1(1), 19-40.

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223-231.

Prunuske, A. J., Batzli, J., Howell, E., & Miller, S. (2012). Using online lectures to make time for active learning. Genetics, 192(1), 67-72. doi:10.1534/genetics.112.141754

Pyc, M. A., & Rawson, K. A. (2009). Testing the retrieval effort hypothesis: Does greater difficulty correctly recalling information lead to higher levels of memory? Journal of Memory and Language, 60(4), 437-447.

Pyc, M. A., & Rawson, K. A. (2010). Why testing improves memory: Mediator effectiveness hypothesis. Science, 330(6002), 335-335.

Ransdell, S., Levy, C. M., & Kellogg, R. T. (2002). The structure of writing processes as revealed by secondary task demands. L1-Educational Studies in Language and Literature, 2(2), 141-163.

Reddy, B. G., & Bellezza, F. S. (1983). Encoding specificity in free recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(1), 167-174.

Renkl, A. (1995). Learning for later teaching: An exploration of mediational links between teaching expectancy and learning results. Learning and Instruction, 5(1), 21-36.

Rickards, J. P., & Friedman, F. (1978). The encoding versus the external storage hypothesis in note taking. Contemporary Educational Psychology, 3(2), 136-143.

Risko, E. F., Anderson, N., Sarwal, A., Engelhardt, M., & Kingstone, A. (2012). Everyday attention: Variation in mind wandering and memory in a lecture. Applied Cognitive Psychology, 26(2), 234-242.

Risko, E. F., & Kingstone, A. (2011). Eyes wide shut: Implied social presence, eye tracking and attention. Attention, Perception, & Psychophysics, 73(2), 291-296.

Robinson, E. S., & Brown, M. A. (1926). Effect of serial position upon memorization. The American Journal of Psychology, 37(4), 538-552.

Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20-27.

Roediger III, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17(4), 382-395.

Roediger III, H. L., Gallo, D. A., & Geraci, L. (2002). Processing approaches to cognition: The impetus from the levels-of-processing framework. Memory, 10(5-6), 319-332.

Roediger III, H. L., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.

Roediger III, H. L., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Roediger III, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. Psychology of Learning and Motivation: Advances in Research and Theory, 55, 1-36.

Roelle, J., & Berthold, K. (2017). Effects of incorporating retrieval into learning tasks: The complexity of the tasks matters. Learning and Instruction, 49, 142-156.

Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 233-239.

Roscoe, R. D., & Chi, M. T. (2007). Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Review of Educational Research, 77(4), 534-574. doi:10.3102/0034654307309920

Ross, S. M., & Di Vesta, F. J. (1976). Oral summary as a review strategy for enhancing recall of textual material. Journal of Educational Psychology, 68(6), 689-695.

Rowe, M. B. (1980). Pausing principles and their effects on reasoning in science. New Directions for Community Colleges, 1980(31), 27-34.

Rowe, M. B. (1986). Wait time: Slowing down may be a way of speeding up! Journal of Teacher Education, 37(1), 43-50.

Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), 1432-1463.

Ruhl, K. L., Hughes, C. A., & Gajar, A. H. (1990). Efficacy of the pause procedure for enhancing learning disabled and nondisabled college students' long- and short-term recall of facts presented through lecture. Learning Disability Quarterly, 13(1), 55-64.

Ruhl, K. L., Hughes, C. A., & Schloss, P. J. (1987). Using the pause procedure to enhance lecture recall. Teacher Education and Special Education: The Journal of the Teacher Education Division of the Council for Exceptional Children, 10(1), 14-18.

Ruhl, K. L., & Suritsky, S. (1995). The pause procedure and/or an outline: Effect on immediate free recall and lecture notes taken by college students with learning disabilities. Learning Disability Quarterly, 18(1), 2-11.

Schacter, D. L., & Szpunar, K. K. (2015). Enhancing attention and memory during video-recorded lectures. Scholarship of Teaching and Learning in Psychology, 1(1), 60-71.

Schmeck, A., Mayer, R., Opfermann, M., Pfeiffer, V., & Leutner, D. (2014). Drawing pictures during learning from scientific text: Testing the generative drawing effect and the prognostic drawing effect. Contemporary Educational Psychology, 39(4), 275-286.

Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26(1-2), 113-125.

Seli, P., Carriere, J. S., & Smilek, D. (2015). Not all mind wandering is created equal: Dissociating deliberate from spontaneous mind wandering. Psychological Research, 79(5), 750-758.

Sitler, H. C. (1997). The spaced lecture. College Teaching, 45(3), 108-110.

Smith, M. C., Theodor, L., & Franklin, P. E. (1983). The relationship between contextual facilitation and depth of processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(4), 697-712.

So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, 51(1), 318-336.

Son, L. K., & Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(1), 204-221.

Stacy, E. M., & Cain, J. (2015). Note-taking and handouts in the digital age. American Journal of Pharmaceutical Education, 79(7), 107.

Sung, E., & Mayer, R. E. (2012). Five facets of social presence in online distance education. Computers in Human Behavior, 28(5), 1738-1747.

Susskind, J. E. (2008). Limits of PowerPoint’s power: Enhancing students’ self-efficacy and attitudes but not their behavior. Computers & Education, 50(4), 1228-1239. doi:10.1016/j.compedu.2006.12.001

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295-312.

Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22(2), 123-138.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. New York, NY: Springer.

Szpunar, K. K., Jing, H. G., & Schacter, D. L. (2014). Overcoming overconfidence in learning from video-recorded lectures: Implications of interpolated testing for online education. Journal of Applied Research in Memory and Cognition, 3(3), 161-164.

Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), 6313-6317.

Szpunar, K. K., McDermott, K. B., & Roediger, H. L. (2007). Expectation of a final cumulative test enhances long-term retention. Memory & Cognition, 35(5), 1007-1013.

Szpunar, K. K., McDermott, K. B., & Roediger III, H. L. (2008). Testing during study insulates against the buildup of proactive interference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(6), 1392-1399.

Szpunar, K. K., Moulton, S. T., & Schacter, D. L. (2013). Mind wandering and education: From the classroom to online learning. Frontiers in Psychology, 4, 495.

Taraban, R., Maki, W. S., & Rynearson, K. (1999). Measuring study time distributions: Implications for designing computer-based courses. Behavior Research Methods, Instruments, & Computers, 31(2), 263-269.

Thomas, A. K., & McDaniel, M. A. (2007). Metacomprehension for educationally relevant materials: Dramatic effects of encoding-retrieval interactions. Psychonomic Bulletin & Review, 14(2), 212-218.

Thomson, D. M., & Tulving, E. (1970). Associative encoding and retrieval: Weak and strong cues. Journal of Experimental Psychology, 86(2), 255-262.

Titsworth, B. S., & Kiewra, K. A. (2004). Spoken organizational lecture cues and student notetaking as facilitators of student learning. Contemporary Educational Psychology, 29(4), 447-461. doi:10.1016/j.cedpsych.2003.12.001

Toppino, T. C., & Cohen, M. S. (2009). The testing effect and the retention interval: Questions and answers. Experimental Psychology, 56(4), 252-257.

Trafton, J. G., & Trickett, S. B. (2001). Note-taking for self-explanation and problem solving. Human-Computer Interaction, 16(1), 1-38.

Tran, R., Rohrer, D., & Pashler, H. (2015). Retrieval practice: The lack of transfer to deductive inferences. Psychonomic Bulletin & Review, 22(1), 135-140. doi:10.3758/s13423-014-0646-x

Tucha, O., Mecklinger, L., Walitza, S., & Lange, K. W. (2006). Attention and movement execution during handwriting. Human Movement Science, 25(4), 536-552.

Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6(2), 175-184.

Tulving, E., & Pearlstone, Z. (1966). Availability versus accessibility of information in memory for words. Journal of Verbal Learning and Verbal Behavior, 5(4), 381-391.

Tyler, S. W., Hertel, P. T., McCallum, M. C., & Ellis, H. C. (1979). Cognitive effort and memory. Journal of Experimental Psychology: Human Learning and Memory, 5(6), 607-617.

Uner, O., & Roediger III, H. L. (2017). The effect of question placement on learning from textbook chapters. Journal of Applied Research in Memory and Cognition, 7, 116-122.

Van Gog, T., & Sweller, J. (2015). Not new, but nearly forgotten: The testing effect decreases or even disappears as the complexity of learning materials increases. Educational Psychology Review, 27(2), 247-264. doi:10.1007/s10648-015-9310-x

Veletsianos, G., Collier, A., & Schneider, E. (2015). Digging deeper into learners' experiences in MOOCs: Participation in social networks outside of MOOCs, notetaking and contexts surrounding content consumption. British Journal of Educational Technology, 46(3), 570-587.

Vellutino, F. R., Scanlon, D. M., & Tanzman, M. S. (1994). Components of reading ability: Issues and problems in operationalizing word identification, phonological coding, and orthographic coding. In G. R. Lyon (Ed.), Frames of reference for the assessment of learning disabilities: New views on measurement issues (pp. 279-332). Baltimore, MD: Paul H Brookes Publishing.

Wahlheim, C. N. (2015). Testing can counteract proactive interference by integrating competing information. Memory & Cognition, 43(1), 27-38.

Wang, S.-K., & Hsu, H.-Y. (2008). Use of the webinar tool (Elluminate) to support

training: The effects of webinar-learning implementation from student-trainers’

perspective. Journal of Interactive Online Learning, 7(3), 175-194.

Watkins, O. C., & Watkins, M. J. (1975). Buildup of proactive inhibition as a cue-

overload effect. Journal of Experimental Psychology: Human Learning and

Memory, 1(4), 442-452.

Weinstein, Y., Gilmore, A. W., Szpunar, K. K., & McDermott, K. B. (2014). The role of

test expectancy in the build-up of proactive interference in long-term memory.

Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(4),

1039-1048.

Weiss, I. R., Pasley, J. D., Smith, P. S., Banilower, E. R., & Heck, D. J. (2003). Looking

inside the classroom. Chapel Hill, NC: Horizon Research Inc.

Wheeler, M., Ewers, M., & Buonanno, J. (2003). Different rates of forgetting following

study versus test trials. Memory, 11(6), 571-580.

Williams, J. L., McCarley, N. G., Haynes, J. M., Williams, E. H., Whetzel, T., Reilly, T.,

. . . Bailey, L. (2016). The use of feedback to help college students identify

relevant information on PowerPoint slides. North American Journal of

Psychology, 18(2), 239-256.

Williams, R. L., & Eggert, A. C. (2002). Notetaking in college classes: Student patterns

and instructional strategies. The Journal of General Education, 51(3), 173-199.

doi:10.1353/jge.2003.0006

Wilson, K., & Korn, J. H. (2007). Attention during lectures: Beyond ten minutes.

Teaching of Psychology, 34(2), 85-89.

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning.

Metacognition in Educational Theory and Practice, 93, 27-30.

Winstanley, P. A. d. (1996). Generation effects and the lack thereof: The role of transfer-

appropriate processing. Memory, 4(1), 31-48.

Wissman, K. T., & Rawson, K. A. (2015). Grain size of recall practice for lengthy text

material: Fragile and mysterious effects on memory. Journal of Experimental

Psychology: Learning, Memory, and Cognition, 41(2), 439-455.

Wissman, K. T., Rawson, K. A., & Pyc, M. A. (2011). The interim test effect: Testing

prior material can facilitate the learning of new material. Psychonomic Bulletin &

Review, 18(6), 1140-1147.

Wittrock, M. C., Marks, C., & Doctorow, M. (1975). Reading as a generative process.

Journal of Educational Psychology, 67(4), 484.

Wixted, J. T. (2004). The psychology and neuroscience of forgetting. Annual Review of

Psychology, 55, 235-269.

Wixted, J. T., & Rohrer, D. (1993). Proactive interference and the dynamics of free

recall. Journal of Experimental Psychology: Learning, Memory, and Cognition,

19(5), 1024-1039.

Wong, L. (2014). Essential study skills. Stamford, CT: Cengage Learning.

Wooldridge, C. L., Bugg, J. M., McDaniel, M. A., & Liu, Y. (2014). The testing effect

with authentic educational materials: A cautionary note. Journal of Applied

Research in Memory and Cognition, 3(3), 214-221.

doi:10.1016/j.jarmac.2014.07.001

Yaffee, R. A. (2002). Robust regression analysis: some popular statistical package

options. Social Science and Mapping Group, 23, 12. Retrieved from

http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/RobustRegAnalysis.pdf

Yang, C., Potts, R., & Shanks, D. R. (2017). The forward testing effect on self-regulated

study time allocation and metamemory monitoring. Journal of Experimental

Psychology: Applied, 23(3), 263-277. doi:10.1037/xap0000122

Yang, C., & Shanks, D. R. (2017). The forward testing effect: Interim testing enhances

inductive learning. Journal of Experimental Psychology: Learning, Memory, and

Cognition, 44(3), 485-492. doi:10.1037/xlm0000449

Yin, Y., Vanides, J., Ruiz‐Primo, M. A., Ayala, C. C., & Shavelson, R. J. (2005).

Comparison of two concept‐mapping techniques: Implications for scoring,

interpretation, and use. Journal of Research in Science Teaching, 42(2), 166-184.

Yu-hui, L., Li-rong, Z., & Yue, N. (2010). Application of schema theory in teaching

college English reading. Canadian Social Science, 6(1), 59-65.

Zaromb, F. M., & Roediger, H. L. (2010). The testing effect in free recall is associated

with enhanced organizational processes. Memory & Cognition, 38(8), 995-1008.

Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker Jr, J. F. (2004). Can e-learning replace

classroom learning? Communications of the ACM, 47(5), 75-79.

APPENDICES

APPENDIX A

EXTENDED LITERATURE REVIEW

Long before the founding of the university, speakers stood before listening crowds to convey a message. In The Akademia, beneath the shade of Athena’s olive trees, Plato and his fellow scholars discussed humanity’s existential origins (Dancy, 1991). Today, the olive grove has been replaced by industrialized campuses sprawling across hundreds of acres, Plato by doctoral scholars, the scripture by PowerPoint, and the guild by masses of young adults equipped with stores of data inconceivable in the fourth century BC.

Yet, The Akademia’s spirit resonates deeply.

Through the lecture halls echo curious voices, tenors and altos bantering in

inquisition. Teachers, pupils, and apprentices strive to exchange minds. Learners fill the

halls flipping through pages of notes, fingertips pattering across keyboards. Although its

form may have changed over the centuries, the universitas magistrorum et scholarium, or

“community of masters and scholars” (Monroe, 1916) is extraordinarily alive today.

The lecture continues to stand as an integral element in academia (Zhang, Zhao,

Zhou, & Nunamaker Jr, 2004). Regardless of their format (online versus face-to-face),

lectures offer unique benefits to learners, such as the addition of episodic, context-based

cues to aid content encoding (Lehman, Smith, & Karpicke, 2014). The purpose of the

lecture has remained relatively unchanged over the course of a century, in contrast to the

rapidly-evolving student population and its shifting educational goals, strategies,

behaviors, and beliefs. To accommodate these changes, instructors now frequently

present lectures in video format, a practice that has quickly gained attention due to not

only its practicality, but also the additional challenges that accompany it. How, then, can

instructors continue to facilitate learning from video lectures?

The scope of this dissertation is to examine several factors that relate to learning

outcomes from webinar-style video lectures. In order to fully understand the primary

components of interest (lecture type, learning activities, and notetaking behaviors), the relevant theories and their applications are explained next.

Learning from Lectures

Assessments of best practices for student learning are certainly not new. One

of the first researchers to directly examine the educational benefits of the lecture was

Corey (1934), who addressed the growing scarcity of lectures still serving their original,

unique, dialectical purpose. Rather than disseminating information, content masters had

to adapt their approaches to fill a new void: encouraging their students to actively learn

the material that had become abundantly accessible in light of technological

advancements. Corey aptly states:

Information in permanent form has accumulated so rapidly and is so readily

available that university students are no longer dependent upon a faculty for

intellectual nourishment in the same sense as they are for stimulation and

guidance… It has been said that the lecturer serves to animate his subject-matter,

that he can intersperse his recitation of facts with sparkling wit and interesting

current illustrations. (p. 160-161)

Evolution of the lecture has been a response to, or in some cases an initiation of,

transformations in the student populace. Both the number and type of student enrollment

have changed over the last century alongside degree-related career requirements

(Krathwohl, 2002). Per Corey’s (1934) reference to the lecture’s efficacy over time,

human learning and memory theories have since expanded to include updated educational

mechanisms; specifically, several primary theories pertain to student learning and

memory in “the digital age” (Stacy & Cain, 2015). In the next section, the primary theory

for the dissertation, the interactive-constructive-active-passive theory (Chi & Wylie,

2014) is described and justified in contrast to two other prominent learning theories.

Interactive-Constructive-Active-Passive (ICAP) Taxonomy

The ICAP theory stands as a parsimonious, cognitively and behaviorally based,

cause-and-effect explanation for learning. Given that learning from lectures is a dynamic

process, the grain size of ICAP focuses less on the instructor and more on the covert cognitive progressions that overt behaviors evoke. Understanding the downstream, implicit effects

of a specific learning activity, such as revising one’s notes, can inform our awareness of

what to expect of each learner as well as how to appropriately measure different types of

learning. In an effort to uncover these processes and their subsequent effects, ICAP is

characterized by an associative network of engagement modes, overt activities that elicit

these modes, subsequent knowledge-changes, and predicted cognitive outcomes as a

result. Each of these facets will be explained next (see Figure A1).

Figure A1. Adapted figure from Chi & Wylie, 2014, displaying engagement modes,

activities, knowledge-changes, expected cognitive outcomes, and expected learning

outcomes in the ICAP framework.

Per its name, ICAP features four major engagement modes to classify overt

activities based on the types of cognitive activities they elicit. The most basic of these

engagement modes is the passive category. Passive engagement is behaviorally characterized by receiving information without prioritizing any of it. Passive engagement

involves holistically (or absently) processing a set of data. A classic example is listening

to a lecture without taking notes or focusing on the important points, which typically

results in poorer memory performance when compared with other engagement modes

(Trafton & Trickett, 2001). When students utilize passive modes of engagement, they

often fail to integrate new information with prior knowledge and thus rely on external

cues to activate that information.

Contrary to the popular catch-all term “active learning,” the active mode of

engagement focuses on the cognitive mechanism behind an activity. In contrast, “active

learning” is a general term for instances in which the learner employs any action beyond

simply receiving the information (i.e., passively) with the intention to improve

performance. The advantages of ICAP are the more exact descriptions and predicted

outcomes based on the mechanism at work. An active mode of engagement asserts that

the learner’s attention will be differentially focused on separate learning items,

strengthening the learner’s schema by integrating different concepts into a more coherent

narrative (Bartlett, 1958; Conati & Carenini, 2001). Subjects who perform active

engagement often demonstrate superficial conception, such as the ability to compare and

contrast concepts.

From active engagement stems constructive, which, similarly to active, has

become a contrived idiom to encompass any type of behavioral or cognitive activity in

which a learner discovers or pieces together a concept. In the ICAP taxonomy,

constructive engagement is instead the creation of novel concepts rather than integrating

separate ones. A classic standard for constructive engagement is concept mapping (Yin,

Vanides, Ruiz‐Primo, Ayala, & Shavelson, 2005). In this activity, learners must invent relationships between concepts. Generating these connections, along with self-made

explanations driving them, is hypothesized to promote schema assimilation. It is with

such inference that application of knowledge to new contexts can become possible, such

as transfer.

Finally, interactive engagement concerns knowledge building as a function of

exchanging information with a partner. The goal of interactive engagement is to create a

coherent model of the content that is built by each partner’s unique contributions. Under

ideal circumstances, both partners co-construct novel mental representations that can then

be applied inventively. For example, co-constructing a concept map facilitates deeper,

more expansive knowledge than constructing a concept map alone (Czerniak & Haney,

1998). The key to interactive engagement resides in the critical pieces each partner lends,

which would otherwise be unattainable and possibly result in less robust comprehension.

Each mode is hierarchically elevated above its predecessor. That is, the active

mode is characterized not only by its individual features, such as manipulating

information, but also by the necessary qualities of the passive mode (i.e., listening to the

lecture is required in order to take selective lecture notes). The constructive mode

encapsulates both passive and active characteristics, and builds upon them its unique

contribution of knowledge generation (i.e., self-explanations are generated throughout the

selective notes a student takes while listening to a lecture). Essentially, a gain of

approximately 8% to 10% is achieved with each level (Menekse et al., 2013) (see Figure

A2).

Figure A2. Memory performance as a function of ICAP engagement mode, from

Menekse et al., 2013.

Two additional theories are frequently used for educational improvement. Like

the ICAP taxonomy, these theories are learner-centered and focus on best practices for

educational design. The first theory, generative or constructivist processing, aims to

differentiate between meaningful and rote learning. Depending on the source,

“constructivist” and “generative” learning theories are largely interchangeable. Most

notably, Mayer describes “meaningful learning” or “generative learning” as a

combination of actions, including the selection, organization, and integration (SOI) of

new information with prior knowledge (R. E. Mayer & Moreno, 2003). For brevity, both will be referred to here as “generative” theory.

The premise of generative learning theory is, like ICAP, to provide demarcation

between types and assessments of learning. Of greatest interest here is the contrast

between meaningful and rote learning (R. E. Mayer, 2002). For learning to be

meaningful, it should involve generation of comprehension, or mental effort, from the

learner. In this vein, Mayer proposed a theory targeting how learners can best absorb multimedia-delivered content; this sub-theory, of specific importance for collegiate lecture learning, is known as the cognitive theory of multimedia learning (R. Mayer, 2005). This theory addresses difficulties arising from instructional design, most notably PowerPoint slides used to teach procedural, cause-and-effect content. Multimedia theory is

predicated on more general research targeting cognitive load (Paas et al., 2003; Sweller,

1994; Sweller et al., 2011), and serves in many instances as a guide for instructors on

how to properly build their lectures to avoid working memory overload (R. E. Mayer &

Moreno, 2003; Sweller, 2010). In contrast to cognitive load theory, which asserts that

more effort is negatively related to learning, ICAP proposes the opposite (within reason).

That is, the addition of appropriately targeted cognitive effort acts as a desirable difficulty, imparting the exertion needed to forge connections that would otherwise fade

(Bjork & Bjork, 2011). Evidence for this claim can be observed in Figure A2, in which

the engagement methods’ hierarchy demonstrates an advantage for each additional level.

Further, one of the primary curricular standards is application of concepts rather

than simple retention. Therefore, differentiating between activities and measurements

associated with each construct is critical. Rote learning aligns similarly to ICAP’s passive

and active modes of engagement. Specifically, rote learning is the retention of

information without the ability to apply it or relate it in any external manner. Meaningful

learning, in contrast, allows for conceptual understanding, manipulation, and extension

outside of the learning parameters. One benefit observed in relation to ICAP is that

“meaningful” is divided into the two sub-components constructive and interactive, both

of which encompass the cognitive mechanisms occurring during each “meaningful”

learning explanation.

A second, important similar theory that has been widely adopted throughout the

world is Bloom’s revised taxonomy. There are many useful dimensions to Bloom’s framework; the taxonomy is learner-focused, allows users to narrow their activities down

to the type of goal they aim to achieve, and allows for degrees of freedom in cognitive

justification (L. W. Anderson et al., 2001; Forehand, 2010; Krathwohl, 2002). However,

ICAP caters more aptly to the needs of the dissertation in that it is more cognition-based

and parsimonious than is Bloom’s taxonomy. As Chi and Wylie (2014) assert:

The major characteristic difference between Bloom’s taxonomy and the ICAP

taxonomy is that Bloom’s taxonomy focuses its users on their instructional goals

and how to measure whether the goal has been achieved, whereas ICAP focuses

its users on the means for achieving the instructional goals. Because one

framework focuses on ends and the other on means, the two frameworks are

complementary. (p. 240)

All of these advantages aside, ICAP still has several weaknesses. There are

three critical boundary conditions when incorporating ICAP taxonomy to learning

environments. The first boundary condition concerns assessment methods. Simply,

assessment methods must be appropriate for the given activity and expected engagement

mode. To demonstrate, comparing knowledge-changes between two activities that elicit

passive engagement using measurements of far transfer may not yield any measurable differences.

Vigilance in utilizing an appropriate measurement system based on the engagement

method is critical in making sense of the resulting performance.

The second boundary concerns the “domain” in which the activity is

operationalized. Essentially, some activities may yield null or impaired performance

because they were not mechanistically fit for the constraints of that domain. An example

provided in Chi and Wylie (2014) describes how self-explanation, an effective

constructive engagement method, may not be helpful in circumstances in which the topic

domain is simply too complex for students’ levels of comprehension. Therefore, it is not

only imperative to adopt an appropriate measurement to assess an activity’s efficacy, but

equally important is ensuring that the activity is practical given the environmental,

learner, and goal-based constraints.

Finally, “task differences within a mode” should be considered when interpreting

learning outcomes based on ICAP taxonomy. Essentially, each mode of engagement is

not meant to be entirely orthogonal; as a result, two activities qualified under a single

engagement mode (i.e., two active engagement activities) are expected to produce

equivalent results, but may also produce significantly different outcomes. This

unexpected result can, at least in part, be attributed to variance associated with learners’

experiences, covert learning strategies, and processing modes. Conversely, activities

derived from two seemingly distinct categories (i.e., passive and active) may yield

equivalent or opposite results in some cases due to similar underpinnings. In sum, as with

any major theory, further testing is warranted to establish the peripheries of learning

activities.

Learning from Video Lectures

The concept that learners’ academic outcomes are guided by multiple variables is

not new. Influences range from perceived self-efficacy, working memory capacity,

orthographic skills, and metacognitive strategy, to factors as detailed as seating

preference within a classroom (Entwistle, McCune, & Hounsell, 2002; Kane & Engle,

2000; Lusk et al., 2009; Smith, Theodor, & Franklin, 1983; Susskind, 2008; Veletsianos,

Collier, & Schneider, 2015) (see Figure A3). Since the internet became an integral part of everyday life, course accessibility has expanded in kind. Today, alternative lecture formats and their various constituents dominate pedagogical recommendations.

Figure A3. Model of factors that contribute to students’ learning outcomes. From

Entwistle et al. (2002).

In recent years, the flipped, blended, and hybrid structures have transformed the standard “classroom” (Aly, 2013; Coats, 2016; DeLozier & Rhodes, 2017; O'Flaherty

& Phillips, 2015; So & Brush, 2008). In nearly all of these platforms, the video lecture is

utilized. It is also used frequently for non-course-related content, such as professional

development seminars (Breslow et al., 2013). Video lectures are intended to have

multiple purposes to improve education. For example, viewing a lecture from home could

alleviate problems related to travel or attendance. Video lectures operate similarly to

textbooks in that they can be stopped, re-played, and attended at individualized paces.

Research from the hybrid lecture literature shows that video lectures viewed outside of

class free up time during scheduled class meetings to focus more on conceptual

understanding and application (Prunuske, Batzli, Howell, & Miller, 2012).

Although beneficial for many students, pausable videos also introduce their own

issues. For example, many students will fast-forward through the lectures to find answers

to homework questions, or they will lose focus similarly to those who struggle in class. In

this sense, many of the unique benefits from lectures are lost. This is a major inspiration

behind live-webinar classes. Unlike traditional video lectures, webinars are released

“live” or under the constraint of one continuous viewing session. Webinar classes deliver

the immediacy of face-to-face lectures, but are viewable from home. They serve as an

intersection between physical lecture attendance and allowing for accessibility from

anywhere (Wang & Hsu, 2008).

One key area of research seeks to identify which learning issues persist in

webinar lectures. What attentional and processing errors might students incur that are

similar to face-to-face, versus those that are unique to webinar situations? The most

pertinent issues in relation to the dissertation are primarily proactive interference, mind-

wandering, and struggles encountered in notetaking. Each of these issues is described

next.

Proactive Interference

When the lecture begins, students activate a lecture-based schema (Yu-hui, Li-

rong, & Yue, 2010). As the instructor begins, students’ cognitive resources are still

available. They are able to integrate the incoming information to their current schema and

even assimilate some of it. However, due to the nature of information processing, these

resources are quickly depleted (Mayer & Moreno, 2003; Sweller, 1994). This issue is

multiplied when the information is novel, complex, and/or delivered at a fast rate (Aiken

et al., 1975). Unless learners can implement and sustain an advanced learning strategy, or

possess extensive background knowledge, it is fair to assume that they will fall victim to

working memory overload at some point during a college-level science lecture.

In a study investigating the electrophysiological correlates of cognitive load,

Pastötter, Schicker, Niedernhuber, and Bäuml (2011) measured alpha-power dynamics in

relation to list-learning. Importantly, as lists were continuously presented, participants’

alpha power activity increased dramatically over the course of list learning, which

matched previous studies’ suggestions that alpha power output is associated with

cognitive load. Results of the study showed that only the lists from the beginning of the

sequence were remembered, confirming the hypothesis that continuous presentation

produces proactive interference.

Naturally, there are cases in which proactive interference can be overridden, but

these instances are rare. Even when not encumbered with additional learning activities, it

is difficult for the average brain to permanently encode, overwrite, and alter budding

schemas. Individual differences such as working memory capacity have been linked to

the ability to overcome proactive interference, and even benefit from poorly-designed

lectures (Aiken et al., 1975). In other cases, background knowledge and expertise result

in decreased proactive interference due to a pre-established, richer mental representation

(McEldoon, Durkin, & Rittle‐Johnson, 2013). Thus, individual differences such as

working memory capacity, interest, and background knowledge can greatly factor into

how students are able to assimilate novel concepts during a continuous lecture.

Mind-wandering

Attention during continuous lectures, across any topic and platform, begins to

wane after about 10 minutes (Hartley & Davies, 1978). It then picks up again during the last 5 or

so minutes of the lecture as students prepare to end the session. The middle portion of the

lecture is ultimately lost after a delay, which explains the resiliency of primacy and

recency effects (Holen & Oaster, 1976; Robinson & Brown, 1926). When combined with

an online platform and no direct repercussions of engaging in non-academic behaviors,

attention becomes even more likely to drift toward other stimuli. Even aside from

external distractions, such as surfing the internet or texting, the fact that learners cannot

be “called upon” during lecture makes mind-wandering nearly impossible to avoid.

Higher rates of non-academic mind-wandering are associated with decreased memory

and comprehension for lecture material (Risko et al., 2012; Szpunar, Moulton, et al.,

2013), so a priority of webinar-learning research is to reduce these instances.

Notetaking

A majority of students report that they take notes in class (Bonner & Holliday,

2006) and that notetaking is important for learning in college (R. L. Williams & Eggert,

2002). To understand how notetaking contributes to learning, the next section will

describe the various factors associated with notetaking and its outcomes.

Notetaking provides two benefits. The first is the encoding mechanism,

which is a memory advantage derived from the act of notetaking in itself (Di Vesta &

Gray, 1972). The encoding function, under certain circumstances, yields higher recall

than just listening (Kiewra, 1989). This result is supposedly due to the selection,

evaluation, and organization of incoming information (Fisher & Harris, 1973; Peverly,

2006; Peverly et al., 2007), similarly to active engagement modes in ICAP. Some

research attributes the encoding benefit to added associations, inferences, and generative processes that might otherwise go unrealized without the demands of transcription. The act of notetaking while listening, essentially, elicits

desirable difficulties that ultimately benefit memory more so than passive (listening)

types of engagement (Bjork & Bjork, 2011; Metcalfe, 2011; R. L. Williams & Eggert,

2002).

What factors predict the utility of the encoding effect? Transcription fluency has

recently resurfaced as one of the more critical components of individual notetaking.

Specifically, transcription fluency is the speed at which someone can perceive

information, encode it, and transcribe it onto paper (Ransdell, Levy, & Kellogg, 2002).

Higher handwriting speed has been associated with higher quality recall and

comprehension (Berninger et al., 1997; Peverly et al., 2013). Specifically, the encoding

mechanism of note-taking concerns the action or presence of scribing notes, whereas

transcription fluency incorporates the element of the rate at which information can be

transcribed, and has been studied in its relation to essay quality and note-taking (Connelly

et al., 2005; Peverly et al., 2013). Like the mechanisms of note-taking, transcription

fluency is also composed of two components, which are described next.

The first constituent of transcription fluency is the fine motor component. This

element involves the actual, mechanical planning and production of letters (Peverly, 2006; Peverly, Brobst, Graham, & Shaw, 2003a; Peverly et al., 2007; Peverly et al.,

2013), which is hypothesized to be related to individuals’ physical writing skills. This is

partially explained by slower handwriting speed in children compared to adults (Oliver,

1990; Tucha, Mecklinger, Walitza, & Lange, 2006). Because adults have had more

experience with handwriting, they are faster at executing any writing-based activity

compared to children. This component is of particular importance when assessing

students’ physical capacities for notetaking during lecture, a problem that has surfaced

regarding students’ slow handwriting speeds (Connelly et al., 2005; Haas, 1989). This

issue is addressed separately from the cognitive component of transcription, which is

orthographic coding.

The orthographic coding aspect of transcription fluency concerns the speed at

which an individual can access verbal codes, such as phonetics of letters and their

combinations (Vellutino, Scanlon, & Tanzman, 1994). Although originally researched in

the dyslexia and dysgraphia literature, the orthographic component has become a major focus

in assessing individuals’ notetaking capacities. Specifically, some students may not

struggle with the mechanics of transcription, but may instead have trouble assigning the

phonetic and symbolic codes to the sounds they hear. This issue requires a different

approach for both learning and subsequent teaching methods, ultimately shaping the

degree to which students may need additional help transferring lecture content into

written format.

Together, fine motor control and orthographic coding influence overall

transcription fluency. In turn, transcription fluency predicts note quality, or the number of

ideas students are able to transcribe in their notes under time constraints (Peverly et al.,

2013). In Peverly et al.’s (2013) experiments, note quality in turn predicted test performance. This finding was foreshadowed by an earlier experiment in which transcription was constrained by lecture speed (Aiken et al., 1975). While listening to a lecture, participants either took notes by hand as the lecturer spoke (“parallel notes”), took notes

during note-taking “spaces” in dedicated pauses between segmented parts in the lecture

(“spaced notes”), or listened. Lecture speed was manipulated so that the lecturer either

spoke at 120 words per minute (“normal speed”) or 240 words per minute (“speeded”).

Under normal lecture parameters, participants who took notes while listening recalled just

as many informational units as those who only listened, while the spaced condition

recalled significantly more. However, as the speed increased, those who listened and

simultaneously took notes quickly fell behind the spaced note-takers and the listeners.

This suggests that the relation between transcription fluency and memory may be moderated by the speed at which instructors lecture, indicating that the encoding function of hand-written note-taking may be beneficial only up to a point: when the lecture speed does not greatly exceed students’ processing abilities. While struggling to

transcribe the meanings of novel concepts, students may completely miss important

examples or clarifications. Rote transcription may suffer along with some comprehension

of the material, generative processing, and elaboration (Dornisch, Sperling, & Zeruth,

2011; Palmere, Benton, Glover, & Ronning, 1983; Wittrock, Marks, & Doctorow, 1975),

which could then yield meager, futile notes.

Although this finding seems intuitive, it revealed that when handwriters experience cognitive overload during lecture, notetaking becomes detrimental. A conundrum arises in that notetaking remains important for its encoding function, so one purpose of the current

study is to find a way to improve students’ notes without sacrificing their lecture

comprehension. While many studies have demonstrated that taking notes results in

enhanced memory compared to just listening (Di Vesta & Gray, 1972), other studies have

failed to demonstrate an encoding effect (Carter & Van Matre, 1975). Therefore, a global

concern shared in this dissertation is how instructors can engage students during

notetaking while also allowing for transcription benefits (Katayama & Robinson, 2000).

The second benefit, the external storage function, shows that studying notes

enhances memory more than not studying (Knight & McKelvie, 1986; Rickards &

Friedman, 1978). Memory benefits most when students are able to engage in both notetaking and review (Carter

& Van Matre, 1975; Hartley & Davies, 1978). The external storage function, throughout

decades of testing under many parameters, stands as the most robust benefit resulting

from notetaking (Bui & Myerson, 2014). This is largely due to the predictable memory

decay that occurs after lecture, resulting in the retention of the “gists” and overarching

scope of the content (Kintsch, 1988). Most notetakers in normal circumstances are able to

retain the general concepts of the previous lecture, and so memory is enhanced when

notes contain details and ideas that enhance the students’ mental representation of the

lecture’s structure (Kintsch, 1974, 1994; Mannes & Kintsch, 1987; Perrig & Kintsch,

1985). Reviewing notes undoubtedly improves memory, and as such is not a major focus

in the current experiment.

That notetaking is generative (or constructive) in nature is an assumption based

on ideal notetaking strategies. In actuality, whether individual notetaking or studying is

passive, active, or constructive depends on what strategy students use. For example, if

students take notes by simply transcribing the lecture verbatim, notetaking could be

qualified as a passive activity. However, if notes are taken selectively and further annotated, notetaking could qualify as active or constructive (depending on the types

of annotations). For example, free-form notetaking produced better learning than cutting

and pasting certain parts of a text (Bauer & Koedinger, 2007). Thus, in order to correctly

identify whether notetaking elicits a particular engagement mode, it is imperative to

examine the notes themselves as well as varieties of learning outcomes (see Figure A4).

Figure A4. Compilation of various notetaking manipulations as a function of the

hypothesized engagement mode.

Notetaking and note studying are of significant importance in content learning

(Benton et al., 1993; Kiewra, Dubois, et al., 1991), but difficulties remain with learning while notetaking (the encoding mechanism).

Counter to the initial findings of Di Vesta and Gray (1972), notetaking can obstruct lecture learning (Aiken et al., 1975; Bui & Myerson, 2014). Notetaking alone is, in some cases, more cognitively demanding than essay-writing (Piolat et al., 2005) and

requires significant mental effort (Connelly et al., 2005; Peverly, 2006; Peverly et al.,

2007; Peverly et al., 2013).

In sum, students must engage in several processes simultaneously. From

comprehending spoken words to selecting, organizing, and then transcribing lecture

material in a timely manner (Peverly et al., 2013), it is common for many students to

encounter severe degrees of cognitive load (Aiken et al., 1975; Bretzing & Kulhavy,

1979; Piolat et al., 2005). Additionally, students today are less adept at employing self-

regulatory learning behaviors during notetaking (Peverly et al., 2003b) and are less

physiologically capable of overcoming these challenges without instructional help

(Bassili & Joordens, 2008; Luo et al., 2016). Further, most students are only able to

transcribe 22 words per minute (by hand) to 33 words per minute (on a keyboard) (Bui et

al., 2013; Luo et al., 2016) and encounter fatigue effects early on in the lecture (Hartley

& Davies, 1978). This results in sparse notes containing only around 35% of the lecture’s

points (Kiewra, 1985; Kiewra, Mayer, et al., 1991; Luo et al., 2016). Lastly, the

likelihood that students will remember information outside of what they transcribed into

their notes is next to none (Bui & Myerson, 2014; Peverly et al., 2003b; Peverly et al.,

2013).
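The severity of this bottleneck can be illustrated with a back-of-the-envelope calculation. The following sketch is a hypothetical illustration, not an analysis from this dissertation; it combines the transcription rates cited above (Bui et al., 2013; Luo et al., 2016) with the 120 words-per-minute “normal” lecture pace used by Aiken et al. (1975) to estimate the fraction of spoken words a verbatim transcriber could keep up with.

```python
# Back-of-the-envelope illustration (hypothetical; values drawn from the
# literature cited above, not from this study's data).
LECTURE_WPM = 120    # "normal" lecture pace (Aiken et al., 1975)
HAND_WPM = 22        # typical handwriting rate (Bui et al., 2013)
KEYBOARD_WPM = 33    # typical keyboard rate (Luo et al., 2016)

def capture_ratio(transcription_wpm, lecture_wpm):
    """Fraction of spoken words a verbatim transcriber can keep pace with."""
    return transcription_wpm / lecture_wpm

print(f"hand: {capture_ratio(HAND_WPM, LECTURE_WPM):.0%}")          # ~18%
print(f"keyboard: {capture_ratio(KEYBOARD_WPM, LECTURE_WPM):.0%}")  # ~28%
```

On these figures, a verbatim handwriter could capture only about one word in five, which is consistent with the sparse notes described above.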

Since ICAP proposes that more meaningful processing results in better

comprehension, and processing tends to be reduced with notetaking, it makes sense to

conclude that students who either take very few notes due to fatigue or resort to verbatim

transcription engage in passive learning (R. C. Anderson, 1972; Bretzing & Kulhavy,

1979). Indeed, some studies have demonstrated that verbatim notetaking is negatively

predictive of some types of memory performance (Mueller & Oppenheimer, 2014;

Titsworth & Kiewra, 2004). The trade-off between the necessity of taking notes and comprehending the lecture presents a unique challenge for video lecture learning, especially given the additional processing complications students experience when they cannot pause or rewind a video, as in live webinars. Therefore, it is imperative to examine other

notetaking components to facilitate active, or even constructive, engagement during

lecture.

Peer Involvement

Since interpolated note revision increased learning outcomes (Luo et al., 2016),

and a teaching expectancy can invoke meaningful, constructive processing (Bargh &

Schul, 1980; Renkl, 1995), taking and revising notes for others could invite learners to

identify the organizational differences in lecture points more so than self-testing.

Peer involvement is postulated to evoke deeper processing for lecture content. In

some studies, students who are informed that they will have to teach the material to

another student (a paradigm known as the teaching expectancy frame) remember more

information than students who prepare to test over it (Fiorella & Mayer, 2013, 2014).

Nestojko et al. (2014) found consistent teaching expectancy effects for free recall, short-

answer performance, organizational structure, content level (main ideas), and use of

efficient study strategies compared to participants who expected a test. In another study,

students learned more when they prepared information for others rather than for

themselves (Doymus, 2008). This effect is hypothesized to be driven by beneficial

encoding activities used by teachers, such as “generative processing,” focusing on key

points, summarization, and seeking relationships among ideas (Bargh & Schul, 1980;

Fiorella & Mayer, 2013; McKeachie, 1987). At the very least, when utilized adequately,

the teaching expectancy frame results in active modes of engagement.

Luo et al. (2016) found that interpolated note revision with a partner produced the

best learning outcomes compared to those who reviewed alone or reviewed together after

a lecture. Partner involvement was proposed to facilitate deeper processing through

perceived social responsibility, and resulted in more elaborative (drawn from outside of

the lecture) revisions. While note revision in itself is active, the addition of peers seems

to initiate (or provide) elaboration beyond the content, a form of constructive

engagement. Whether the addition of elaborative revisions were due to co-creation, or

inclusion of their partners’ constructive elaborations, was not investigated, and thus

cannot inform whether interactive engagement took place. Regardless, peer involvement

in notetaking has the potential to alter processing strategies to enhance meaningful

learning.

Although video lectures are designed to be viewable from locations other than

classrooms (Copley, 2007; Lyons et al., 2012), instructors frequently assign learners to

work together (either electronically or in-person) on various projects (Comer et al., 2014;

So & Brush, 2008). Despite the many advantages in the collaborative learning literature

(Cranney et al., 2009; So & Brush, 2008), there are three primary issues encountered

when considering peer involvement in learning from video lectures.

The first issue is inconsistency of results. One approach employing peer involvement

to enhance learning is the teaching expectancy manipulation. While several studies have

found that expecting to teach increases learning and memory relative to expecting a test

(Bargh & Schul, 1980; Benware & Deci, 1984; Daou, Lohse, & Miller, 2016; Ehly,

Keith, & Bratton, 1987; Renkl, 1995), other studies have failed to find a teaching

expectancy effect (Ehly et al., 1987; Ross & Di Vesta, 1976). Some research suggests

that outcomes are dependent on whether students actually end up teaching (Fiorella &

Mayer, 2013, 2015a).

A second issue with peer-involved learning stems from individual differences. A partner’s efficacy depends on numerous considerations, some of which include the

partner’s intrinsic motivation (Benware & Deci, 1984), self-efficacy (Sung & Mayer,

2012), perceived independence and authoritative role (Bruffee, 1999), social loafing and

competence (Meyer, Schermuly, & Kauffeld, 2016), background knowledge (Fiorella &

Mayer, 2015b), and social identity (Eddy, Brownell, Thummaphan, Lan, & Wenderoth,

2015). Further, partners are prone to adopt a “knowledge-telling” (rather than knowledge-

building) approach to partnership work (Barron, 2003; Chi & Menekse, 2015; Roscoe &

Chi, 2007). This was a proposed explanation for why there was no main effect of

“partners” in note revision in Luo et al.’s (2016) experiments (although the partner × interpolation interaction was significant). In short, actual partner

involvement would benefit from some form of training and/or partner matching (Fiorella

& Mayer, 2015a, 2015b; Luo et al., 2016).

Finally, a critical limiting factor in peer involvement is anxiety.

Specifically, a considerable number of studies have shown that a teaching expectancy

improves learning relative to test expectancy, but these benefits are negated by the anxiety of

expecting to teach (Ameen, Guffey, & Jackson, 2002; Renkl, 1995; Ross & Di Vesta,

1976).

In sum, peer involvement seems to invoke adaptive, active/constructive

knowledge acquisition, but its benefits may be undermined by extraneous variables when peers are

actually involved. Indeed, perceived degrees of social presence in online courses increase

motivation, reduce drop-out, and increase both satisfaction and overall performance (Ke,

2010; Lyons et al., 2012; So & Brush, 2008) while still allowing users to maintain their

independence. Therefore, latent advantages of perceived social involvement should not

be omitted, but rather, cultivated. In this sense, some of the advantages of peer

collaboration were operationalized in combination with another critical manipulation,

known as the pause procedure.

Spaced Lectures

The concept of breaking a lecture up into sections is not new. Literature stemming

from distributed practice supports the notion that learning complex, related ideas is better

when performed as a distributed sequence, rather than in a “massed” fashion (Clark &

Mayer, 2010; Lusk et al., 2009). That lectures are generally organized hierarchically, and that most students have substantial trouble differentiating main ideas from supporting points in continuous lectures (Lebauer, 1984; Olsen & Huckin, 1990), further motivates this investigation.

The pause procedure (also referred to as the pause principle, spaced or segmented

lecture) was introduced in the late 1970s as one way to test the hypothesis that A) different learning activities could improve memory relative to restudying, and B) the benefits

of active learning could be optimized based on when they occurred either before, during,

or after lecture (Di Vesta & Smith, 1979). In Di Vesta & Smith’s study, participants

completed individual mental review, peer discussion, and/or unrelated distractor tasks

such as puzzle completion. The peer discussion conditions consistently produced the

highest recall and retention when conducted periodically throughout the lecture. This

effect has been replicated numerous times over the last several decades, and again

recently as a useful intervention for enhancing lecture memory (Bachhel & Thaman,

2014; Di Vesta & Smith, 1979; Rowe, 1980, 1986; Ruhl et al., 1990; Ruhl et al., 1987;

Ruhl & Suritsky, 1995; Sitler, 1997).

One shortcoming is that very few studies have directly compared the spaced

lecture with a traditional, continuous one. Di Vesta and Smith (1979) found that peer

discussion was more effective for memory than “individual review” (mentally reflecting

over the material) when it was interspersed throughout a lecture. Interspersed puzzle

completion encumbered memory, likely because it acted as a mode of interference since

it was unrelated to the task (Wixted, 2004; Wixted & Rohrer, 1993). Although some researchers have claimed that immediate reinforcement should obstruct retention (Hintzman, Block, & Summers, 1973), it generally enhances retention of new information (Di Vesta & Gray, 1973; Di Vesta & Smith, 1979; Hebb, 1966; Howe, 1970); thus, participants in the interspersed discussion conditions had more opportunity to reorganize, clarify, and deeply process the material.

An important and unusual finding is that peer discussion encumbered memory

compared to puzzle completion and reflection when it occurred after lecture. This is

contrary to the evidence that active learning of new information enhances memory.

Interestingly, Di Vesta and Smith (1979) did not address possible explanations for this

result. The outcome could simply stem from motivational reductions that often

accompany the end of continuous lectures (i.e., mental fatigue). Similarly, the type of

active processing involved in peer discussion may not be aptly facilitative when

performed after a lecture. It is possible that participants who discussed ideas afterward

may have encountered some of the lecture constraints (proactive interference, exhausted

processing capacity, and mind-wandering), which therefore reduced information

available for discussion afterward and subsequently impoverished retrieval. This

ambiguous finding raises the question of whether another form of active learning may be

more robust to interference during or after lecture, such as testing or note revision.

Self-testing

Test administration is an established method to measure student knowledge

(Angelo & Cross, 1993; Dempster, 1996; Roediger & Butler, 2011). Testing has also

become a valid learning tool in class (McDaniel, Anderson, Derbish, & Morrisette, 2007;

Roediger & Butler, 2011; Roediger & Karpicke, 2006a, 2006b). An alternative to peer discussion, which is unlikely to be of use during webinar video lectures, is the implementation of self-testing, or retrieval practice. Specifically, studied information that

is tested will be better remembered long-term than information that is instead restudied

(Roediger, Agarwal, et al., 2011; Roediger & Karpicke, 2006a).

Testing is hypothesized to benefit memory directly and indirectly. Direct effects

are derived from the intrinsic nature of testing. Retrieval requires specific cognitive

activities, such as facilitative processing (Arnold & McDermott, 2013; Rowland, 2014;

Tulving, 1967) and memory trace strengthening through incremental practice and test

difficulty (Bjork & Bjork, 2011; Gardiner et al., 1973; Masson & McDaniel, 1981; Roediger & Karpicke, 2006a; Tyler et al., 1979; Wheeler, Ewers, & Buonanno,

2003).

Comparatively, indirect effects of testing suggest that retrieval both requires and

results in cognitive, metacognitive, and strategic processes in subsequent study or recall

sessions, such as increases in metacognitive awareness and control (A. C. Butler,

Karpicke, & Roediger III, 2008; Pilegard & Mayer, 2015; Szpunar et al., 2014; Thomas

& McDaniel, 2007; Winne & Hadwin, 1998), task-relevant behaviors (Jing et al., 2016;

Schacter & Szpunar, 2015), and enhanced organizational skills or elaborative associations

(Agarwal, Karpicke, Kang, Roediger, & McDermott, 2008; Carpenter, 2009, 2011;

McDaniel, Roediger, et al., 2007; Pyc & Rawson, 2010; Zaromb & Roediger, 2010). Just

the expectation of an upcoming test enhances memory (Fitch et al., 1951; Szpunar et al.,

2007; Weinstein et al., 2014).

For these reasons, the testing effect tends to be robust to many factors, such as test

type (e.g., short answer, recognition, free recall), delay between initial learning trials and

final assessment (Congleton & Rajaram, 2012; Toppino & Cohen, 2009), and material

type (e.g., paired-associates, prose, multimedia, lectures, and map learning) (Allen,

Mahler, & Estes, 1969; Carpenter & Pashler, 2007; Coppens, Verkoeijen, & Rikers,

2011; Johnson & Mayer, 2009; McDaniel, Anderson, et al., 2007; Roediger & Karpicke, 2006a; Szpunar, Khan, et al., 2013).

What types of tests should be used? In conjunction with ICAP’s proposition

toward cognitive engagement, the difficulty-of-retrieval hypothesis of testing asserts that

direct effects are best observed when the retrieval attempts are demanding (Pyc &

Rawson, 2009). Indeed, many studies have established that the more difficult the retrieval

attempt, the stronger the effect on memory (Zaromb & Roediger, 2010). Specifically,

participants who practice short-answer retrieval outperform those who use multiple-

choice (Foos & Fisher, 1988; Kang, McDermott, & Roediger III, 2007), and

in turn, those who engage in free recall outperform all other retrieval modes.

What is the explanation for the testing effect? There are varying schools of

thought in this regard. Most explanations converge on the idea that any activity that

forces the learner to engage with the material will promote deeper processing than re-

exposure (Bellezza, Cheesman, & Reddy, 1977; Congleton & Rajaram, 2012; Gardiner et

al., 1973; Reddy & Bellezza, 1983). This effect is increased when the learner utilizes a

meaningful strategy, assigning multiple cues and contexts to the content (Masson &

McDaniel, 1981; R. E. Mayer, 2002; Tyler et al., 1979; Wheeler et al., 2003). Further,

some researchers claim that retrieval mandates facilitative, organizational, elaborative,

relational, generative, and constructive processing, such that a successful attempt

establishes and strengthens accessibility routes within semantic networks (Arnold &

McDermott, 2013; Blunt & Karpicke, 2014; Carpenter, 2009, 2011; Carpenter & DeLosh,

2006; Congleton & Rajaram, 2012; Eglington & Kang, 2016; Karpicke & Blunt, 2011;

Lehman et al., 2014; Thomson & Tulving, 1970).

Temporal placement of testing in classes can also improve learning. Priming

students for the upcoming session by starting class with a quiz (over the previous lecture

or the day’s assigned reading) (Bertou, Clasen, & Lambert, 1972; Leeming, 2002;

Narloch et al., 2006) or ending the class with brief tests over key components (A. C.

Butler & Roediger III, 2007; Johnson & Mayer, 2009; Lyle & Crawford, 2011) boosts

retention compared to students who are not tested or study instead. This has held true for

both high-stakes (integrated into students’ course grades) (Cranney et al., 2009; Fitch et

al., 1951; Leeming, 2002; McDaniel, Anderson, et al., 2007; McDaniel, Roediger, et al.,

2007) and low-stakes (extra credit; Padilla-Walker, 2006) scenarios.

Many studies have examined pre- and post-lecture tests, but interspersing them

throughout a lecture is also beneficial. To further the notion that segmentation-based

presentation is more effective for learning, and also that following each segment with an

activity is better than restudying, the next section highlights the utility of testing during

learning from video lectures.

Interpolated Testing

Given lecture-imposed learning constraints and situations in which peer-based active learning may not be the best or even a possible option (such as online courses, peers who may create misinformation, large classes, and lectures that require extensive class time to deliver adequate volumes of material), test-based interventions may provide benefits similar to those observed with the pause procedure. To this point, administering tests

periodically during the lecture is a method known as interpolated testing. Short writing

assignments (A. Butler et al., 2001), quizzes (McAndrew, 1983; Roediger, Agarwal, et al., 2011), and clicker responses (Bunce et al., 2010; R. E. Mayer et al.,

2009) have become popular and have resulted in positive outcomes, such as increased

exam grades (Jing et al., 2016; Narloch et al., 2006; Padilla-Walker, 2006; Roediger, Agarwal, et al., 2011; Szpunar et al., 2007; Weinstein et al., 2014).

Proactive interference. The emergence of interpolated testing stems from

research asserting that retrieval during learning enhances subsequent retention. Studies

incorporating free recall periodically throughout learning trials, such as reading prose or

video lectures, showed marked memory advantages compared to those who restudied

during those intervals (Wissman et al., 2011). The idea of retrieval-induced facilitation

posits that the direct and indirect effects of retrieval promote consolidation and trace

strengthening when conducted between learning segments (Chan et al., 2006; Cranney et

al., 2009). This then interacts with contextual cues and episodic memory contributions to

create independent, separate learning “segments” (Lehman et al., 2014). Ultimately,

interpolated testing can reduce proactive interference by strengthening and integrating

learned information before acquiring additional concepts (Jing et al., 2016; Wahlheim,

2015).

Integration. A complement to the interpolated testing results was a measure of

conceptual integration, or the degree to which learners integrated concepts within a

corpus of information (Wahlheim, 2015). Interpolated testing increased rates of segment

“clustering” in final recall compared to participants who completed unrelated distractor

tasks (Szpunar et al., 2008) or studied their notes (Jing et al., 2016; Szpunar et al., 2014).

Rather than interrupt relational processing across lecture (Peterson & Mulligan, 2012,

2013), interpolated testing increased integrative processing both within lecture

segments as well as across them. Integration was measured in two ways: for free recall,

instances in which participants included a direct reference to another portion of the

lecture, and for cued recall, by the amount of relevant elaboration generated when

presented with a lecture slide and asked to expound on how it related to other parts of the

lecture (Jing et al., 2016; Wahlheim, 2015). Furthering this notion, Szpunar et al. (2008)

concluded that reduced proactive interference and enhanced subsequent encoding could

be explained by reductions in cue overload. Essentially, interpolated testing “confines”

each segment into its own event, which can then be representationally organized to

reduce the functional search set and aid retrieval (Watkins & Watkins, 1975).

Mind-wandering. In earlier experiments, interpolated testing significantly reduced

mind-wandering compared to interpolated reflection or study sessions (Jing et al., 2016;

Szpunar et al., 2014; Szpunar, Khan, et al., 2013). Jing et al. (2016) probed all

participants throughout a 40-minute lecture with questions assessing A) whether they

were mind-wandering (Experiment 1), and B) the topic over which they were mind-

wandering (Experiment 2). Experiment 1 revealed that both tested and non-tested

participants mind-wandered at equal rates, but also that the tested group recalled

significantly more information on the final test. Experiment 2 showed that mind-

wandering differed qualitatively. Participants in the re-study conditions reported higher

instances of lecture-unrelated thoughts, whereas the tested participants’ thoughts were

oriented towards the lecture itself. These results suggested that the inherent properties of

testing enhanced attention to the lecture, which drove higher performance on final cued

and free recall tests.

Notetaking. Interpolated testing produced significantly higher rates of notetaking,

which was assumed to be a product of the indirect effects of testing. Specifically, higher instances of notetaking (as measured by annotations made by hand to printed versions of the PowerPoint slides), together with fewer reports of task-unrelated mind-wandering, were indicative of sustained attention to lecture compared to the lower note quantity produced in

the re-exposure (exposure to questions), and non-interpolated (distractor) conditions (Jing

et al., 2016; Schacter & Szpunar, 2015; Szpunar, Khan, et al., 2013; Szpunar et al., 2007;

Szpunar et al., 2008; Szpunar, Moulton, et al., 2013; Weinstein et al., 2014). To the

extent that notetaking is considered in the interpolation literature, interpolated testing

appears to promote attention, reduce mind-wandering, and reduce proactive interference,

which is further supported by the higher rate of notetaking. This claim is substantiated

through other demonstrations where cognitive load manipulations result in decreased note

quantity and poorer recall, and notetaking factors are correlated with performance

(Armbruster, 2000; Beck, 2015; Bretzing & Kulhavy, 1979; Carter & Van Matre, 1975;

Di Vesta & Gray, 1972; Fisher & Harris, 1973; Hartley & Davies, 1978; Kiewra, Dubois,

et al., 1991; Rickards & Friedman, 1978; R. L. Williams & Eggert, 2002).

Research Questions

The Engagement Mode of Interpolated Testing

When learning requires retention, comprehension, and integration of concepts

across a dynamic set, such as an expository text or a lecture, the effects of testing become

fickle. For example, spaced practice using retrieval results in better retention after a delay

compared to restudying (Roediger & Karpicke, 2006b), but how well would

interpolated testing stand up after a delay? In two separate studies (Wissman & Rawson,

2015; Wissman et al., 2011), expository texts interspersed with retrieval resulted in

enhanced retention for tested compared to restudy groups (known in the reading literature

as the interim-test effect). However, after a 20-minute delay, there were no performance

differences between the tested and non-tested groups. Wissman and Rawson (2015)

aptly noted that while interim tests can enhance retention immediately, the long-term effects differ from those established in regular spaced practice and testing effect

designs, and may in turn yield “fragile and mysterious” effects. Since a principal goal of

educators is to increase lecture learning through meaningful processing, engagement

modes that support long-term semantic change should be prioritized. In essence, there

exists a discrepancy within the interpolated testing literature: whereas significant effects are encountered immediately after a lecture, long-term results may still fall victim to interference. This susceptibility warrants further investigation into the

causal mechanisms behind retrieval during lecture learning.

There are two key criticisms in labeling self-testing as “generative.” The first

criticism lies in the assessment methods. Retrieval can certainly strengthen memory

traces, but whether those traces are assembled into a semantic relationship can only be assumed when free recall is the criterion test. A majority of testing effect studies neglect true assessment of active or constructive learning and transfer, and instead rely on identical questions from the self-testing trials, surface-based (easy) questions, free recall,

or poor representations of transfer. For example, McDaniel, Roediger, et al. (2007)

operationalized transfer by testing identical pieces of information in different formats on the

criterion test (“The ____ axon” during learning trials, followed by “The ganglion ____”

in the criterion test). Since free recall assesses, by definition, nothing beyond

participants’ capacity to reproduce a stored fact in an identical context, the ICAP

taxonomy would technically categorize this as passive engagement. The learners are not

required to integrate or manipulate the information in any way; thus, basic retrieval can


be observed, but it serves only as an index of what is stored and accessible at that moment.

Indeed, Tran et al. (2015) failed to demonstrate a testing effect when analyzing

higher-level comprehension. Rather, across several experiments, tested participants

consistently recalled more facts than those who restudied, but the opposite was observed

when the criterion task required synthesizing (integrating) the information. This was

suspected to be due to the type of processing mandated during self-testing (i.e., item-

specific and passive), whereas those in the restudy condition were able to link concepts

together. A recent study specifically concluded that self-testing “yields potent, but

piecewise, fact learning” (Pan et al., 2016). This result is not new, but receives

inexplicably scarce attention (Agarwal, 2011; Roelle & Berthold, 2017). In this sense,

self-testing seems to stand as a complementary quantitative, rather than qualitative,

measure of knowledge. Therefore, to assess whether self-testing entails some degree of covert active or constructive processing, it is important to use corresponding assessment methods, such as integration-based questions and transfer.

The second issue with this claim is that the materials used to qualify self-testing

as a “generative,” higher-education-oriented learning activity employ items that are

unrepresentative of the interconnectivity, hierarchy, and complexity of actual educational

materials (Wooldridge et al., 2014). In a review promoting the testing effect’s efficacy in

transfer application, Carpenter (2012) cited studies that utilized word pairs (Carpenter et

al., 2006), mathematical functions (Kang, McDaniel, & Pashler, 2011), spatial memory

(Carpenter & Kelly, 2012; Rohrer, Taylor, & Sholar, 2010), prose text (Karpicke &


Blunt, 2011), picture drawing (Schmeck et al., 2014) and simple, short texts (Henry L

Roediger III & Karpicke, 2006a, 2006b). Since some of the primary boundary conditions of self-testing are material complexity, length, and elementary relatedness (Rowland, 2014; Sweller, 2010; Van Gog & Sweller, 2015), it is essential to examine more applied lecture

scenarios, such as college-level science materials in a video lecture.

Osborne (2013) overtly recognized the discrepancy between the goals of science

education and the outcomes generally observed. In a series of observations, Osborne states that the crux of current science instruction is that we teach facts, and we tell these facts. Rather, we should focus on teaching not only the facts but how those facts relate to one another.

Recall is a popular assessment method, and is now becoming a popular learning method as well, but does it achieve the desired outcomes of comprehension?

Osborne states:

… An understanding of the overarching conceptual coherence and the

nature of the discipline itself only emerges for those who complete undergraduate,

if not graduate education... To the novice lacking any overview science can too

often appear to be a ‘miscellany of facts’ akin to being on a train with blacked out

windows where only the train driver knows where you are going. (p. 266)

This is troubling in the sense that little to no connection to real-world application

is absorbed, especially when students’ priorities are to simply memorize isolated facts

(Weiss, Pasley, Smith, Banilower, & Heck, 2003). This is hardly surprising when, in an observational study of science teaching, Weiss et al. (2003) concluded

that just 14% of observed classes incorporated activities to invite critical thought and

analysis. Self-testing, by its nature simply the retrieval of learned information, does not go

beyond the scope of memorization, which is the current issue. Students need to


differentiate between concepts semantically, to identify how ideas relate and whether

those conclusions are valid. This capability does not seem to be readily available under

the constraints of retrieval. Therefore, an examination based on engagement mode, proper assessment, and comparison with different activities is important to further delineate the boundary conditions of interpolated testing.

Notetaking Assessment

Jing et al. (2016) included a notetaking measure in two experiments and found

that interpolated testing produced significantly higher rates of notetaking, which was

assumed to be a product of the indirect effects of testing. Specifically, higher instances of

notetaking (as measured by cases of annotations made by hand to printed versions of the

Power Point) were taken as indicative of attention to the lecture. Indeed, other studies have shown

that cognitive load manipulations result in decreased note quantity and poorer recall

(Aiken et al., 1975). Therefore, there are several critical points to consider when judging the role of notetaking in the studies on interpolated testing.

The first question addresses whether Power Point annotations may affect the

encoding benefit of notetaking. Notetaking, regardless of strategy or medium, is

cognitively demanding (Piolat et al., 2005), and for decades was considered only

effective for retaining material post-lecture when combined with a “summarization” strategy

(Bretzing & Kulhavy, 1979). Moreover, note quality is a significant predictor of test

performance and is highly correlated with note quantity (Bui et al., 2013; Peverly et al.,

2013), which was replicated using Power Point annotations in Experiment 1 of Jing et al.


(2016). However, no differences in notetaking were found in Experiment 2

under almost identical conditions.

Although Power Points are used in lectures frequently (Marsh & Sink, 2010),

several studies have confirmed that Power Points change the way in which students

process lecture material (Haynes, McCarley, & Williams, 2015; Marsh & Sink, 2010; J.

L. Williams et al., 2016). Further, the literature on Power Point handouts is mixed and

depends on the handout’s degree of completeness. In some cases, handouts increase

students’ perceived learning but do not affect actual performance (Susskind, 2008). When

combined with naturally higher levels of overconfidence in video lecture learning

(Szpunar et al., 2014), the role of Power Point handouts becomes doubtful. This raises the question of whether Power Point annotations are an appropriate notetaking method to use in interpolated testing, especially since (a) most students take notes from scratch (Kay & Lauricella, 2014) and (b) students may not receive slides from their instructors in all classes (Brazeau, 2006). Potentially, the benefits of interpolated testing could be

enhanced with a more representative notetaking method, especially since students are invited into active or constructive engagement through transcription.

A second caveat in this respect is that testing consistently produced higher test scores than conditions in which participants were allowed to study their notes between lecture segments,

which is harmonious with theories of the testing effect, but challenges the efficacy of

notetaking (Rickards & Friedman, 1978). However, the long-term benefits of notetaking may not manifest until after a delay. The interpolated testing studies all used same-day

designs, where the learning and final test trials were separated by several minutes of


distractor tasks, which most likely interfered with consolidation. Additionally, most

testing effect literature (both applied and laboratory) does not address or control for the impact of notetaking on memory.

Third, the literature on notetaking uses a continuous lecture format, except for a

handful of older “spaced lecture” studies in which participants took notes or studied

during the pauses (Aiken et al., 1975; Di Vesta & Gray, 1973). No recent studies have

investigated how interpolated testing combined with handwritten notetaking may yield

different outcomes compared to traditional, continuous lecture notetaking during a

webinar lecture.

Note Revision

Few argue against the benefit of notetaking for course performance. There is also

merit in note revision during learning (recently termed the “missing link” of the notetaking literature; Luo et al., 2016). The advantages of note revision are thought to arise from retrieval and generative processes involved in adding ideas not included in the original notes (R. L. Williams & Eggert, 2002).

Based on ICAP taxonomy, the addition of new information to what was already

present in the notes characterizes note revision as active (if additions consist of

information from the lecture) or constructive (if additions consist of information from

outside of the lecture). Of course, whether learners will use this tool to their advantage is

its own question, so this label is justified under the assumption that learners are indeed

engaging in integrative or generative processes.


To illustrate, Luo et al. (2016) conducted two experiments on the efficacy of note

revision. In the first experiment, instructions assigned participants to either re-write or

revise their notes after a lecture. In Experiment 2, participants were told to re-write or

revise either during pauses throughout the lecture, or after a lecture (which was

conducted with or without partners as well and is addressed in the next section). Revisers

were told to add any information that could have been missed during the lecture and

anything else that could help them learn the material. In Experiment 1, there was only a modest effect for note revision compared to re-writing notes. However, in Experiment 2,

the effect of revision on number of notes, additional notes added during revision, and

performance was amplified when revision occurred frequently throughout the lecture

(i.e., in an interpolated fashion). This was especially pronounced for relational items that

required participants to integrate separately-presented ideas, which constitutes active

engagement. In sum, notetaking and studying are both effective learning activities, but

note revision can also benefit learning and memory.

Note Revision for Others

Since notes are integral for most students’ academic success, how could students

who are unable to view the lecture compensate for the loss of encoding and external storage

benefits? Further, how might students with disabilities overcome such obstacles? In

online and face-to-face lectures, students commonly ask their peers for a copy of their

notes. Similarly, students who know ahead of time that they will miss a lecture may make

arrangements with a peer to obtain a copy of that day’s lecture notes. Students with

disabilities may request notetaking assistance, for which instructors may account by


asking for peer volunteers to take notes for those students. In most instances, the absent

or disabled student will benefit from reviewing the peer’s notes as opposed to having

nothing to review at all (Carter & Van Matre, 1975; Di Vesta & Gray, 1972). However,

from this accommodation arises a subsequent question: how does the act of notetaking for

a peer affect the notetaker?

Temporal Distribution

A phenomenon known as the serial position effect demonstrates that, due to

various processing factors, students tend to best remember information from the

beginning or end of a lecture (Holen & Oaster, 1976; Johnston & Calhoun, 1969). For

example, Hartley and Davies (1978) had participants perform free recall immediately

after a lecture and found that 70% of students’ recall came from the first 10 minutes of

lecture, 20% came from the final 10 minutes of lecture, and only 10% from the middle 10

minutes. Although Jing et al. (2016) established that interpolated testing prevented

proactive interference, it is not known whether interpolated testing protected against

serial position effects. Since interpolated testing increases the likelihood that participants will stay engaged in the lecture relative to participants who study (Szpunar, Khan, et

al., 2013), the current experiment will assess whether interpolated lectures allow for

memory and comprehension throughout the middle portions of the lecture as well as the

beginning and end.

Mind-wandering probes. One question is whether the presence of probing

questions altered the re-study conditions’ attention. An established intervention to

improve metacognitive knowledge (and, subsequently, memory) is frequent self-


assessment and adjustment of knowledge and attention (King, 1989, 1991; Schraw,

1998). By recurrently probing participants for their attentional status throughout a lecture,

the results from the recent studies on mind-wandering and interpolated lectures (Jing et

al., 2016; Szpunar et al., 2014; Szpunar, Khan, et al., 2013; Szpunar, Moulton, et al.,

2013) may not adequately reveal the true relationship between interpolated testing and

the dependent variables. Specifically, the effects in the re-study groups may represent

somewhat inflated results due to the probes. It is unlikely that the testing group was

affected, since testing in itself acts as a metacognitive check and reduces overconfidence

(Schacter & Szpunar, 2015; Szpunar et al., 2014). By removing the probes in the planned experiments here, larger differences may be observed across the dependent variables, such as recall quantity and quality, as well as consistent differences in note quantity, where none were found in Experiment 2 of Jing et al. (2016).

Summary

The overall scope of this study is to extend the recent literature regarding the

effects of tests administered during lecture. First, no recent research has directly

compared interpolated testing to tests administered after a lecture, and the few studies conducted in the past yielded inconclusive results because testing was not

directly used as a manipulation. Second, although the studies on interpolated testing have

found that testing can affect note quantity, none have incorporated the influence of

handwritten notetaking, the primary notetaking method for most university students.

Dependent variables that have not been addressed, but are important for understanding


students’ conceptualization of lecture, are the hierarchical structure and temporal

distribution of the lecture material among notetaking, memory, and integration outcomes.


APPENDIX B

LECTURE TRANSCRIPT AND CODING SCHEME

M = Main idea, ID = Important Detail, LID = Less-Important Detail, NA = Unclassified

Sentences italicized = verbatim lecture transcript

Example:

Also, while an individual language of course is learned, the ability to recognize the

individual sounds in any language (which we call phonemes) is present in humans at

birth (LECTURE TRANSCRIPT)

(IDEA UNITS, SEGMENT, CLASSIFICATION, IDEA UNIT NUMBER)

-(even though a) Language is learned, (1, LID, 19)

-we are able to recognize individual sounds in any language at birth (1, ID,

20)

-individual sounds are called phonemes (1, ID, 21)
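Each idea unit above is a record with a paraphrase, a lecture segment, a classification code, and a running number. As a minimal illustrative sketch only (the IdeaUnit type and all names below are assumptions for illustration, not part of the dissertation's actual scoring materials), such records could be stored and tallied by classification when scoring recall protocols:

```python
# Illustrative sketch of the idea-unit coding records shown above.
# The IdeaUnit type and field names are assumptions, not the
# dissertation's actual materials.
from collections import Counter
from typing import NamedTuple

class IdeaUnit(NamedTuple):
    text: str            # paraphrased idea unit
    segment: int         # lecture segment number
    classification: str  # "M", "ID", "LID", or "NA"
    number: int          # running idea-unit number

units = [
    IdeaUnit("(even though a) Language is learned", 1, "LID", 19),
    IdeaUnit("we can recognize individual sounds in any language at birth", 1, "ID", 20),
    IdeaUnit("individual sounds are called phonemes", 1, "ID", 21),
]

# Tally idea units by classification code.
counts = Counter(unit.classification for unit in units)
print(counts["ID"], counts["LID"])  # 2 1
```

This kind of tally is one plausible way to summarize how many main ideas, important details, and less-important details a participant's recall captured.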

Master Code List:

Hello and welcome back again to our course on the brain.

-This is a course on the brain (1, M, 1)

Here we’re going to begin the third segment of our course, where we’re going to discuss

a number of higher order cognitive functions, like the subject for our lecture today:

language.

-This course talks about the higher order cognitive function that is language

(1, M, 2)

Our goal in this lecture will be to review the evidence that very specific areas of the brain

are going to play a role in both spoken and written language.

-This lecture is going to review the parts of the brain that play a role in

spoken and written language. (1, NA, 3)

Now, language involves higher order sensory areas and higher order motor areas, and

that should make sense to you.

-Language involves higher order sensory and motor areas. (1, ID, 4)

For example: auditory areas are involved in the ability to interpret spoken language as

meaningful, so this is going to be a function of a higher order sensory area.


-Auditory areas are involved in the ability to interpret spoken language as

meaningful, (1, ID, 5)

-Auditory areas are a part of a higher order sensory area. (1, LID, 6)

Motor areas are going to be involved in the ability to produce the specific combination of

sounds that compose a given language, and which are meaningful to any native speaker.

-Motor areas are involved in the ability to produce the sounds that compose a

language. (1, ID, 7)

-Sounds of language are meaningful to a native speaker. (1, LID, 8)

So this means by definition that higher order sensory and motor areas are going to be

involved.

-Higher order sensory and motor areas are going to be involved in producing

sounds. (1, ID, 9)

We don’t just make sound, and we don’t just listen to noise.

-We don’t just make sound (1, LID, 10)

-and we don’t just listen to noise (1, LID, 11)

Language involves communication between one person and another.

-Language involves communication between two people. (1, NA, 12)

Now, our species appears to be unique in our ability to communicate symbolically

through language. Other animals may communicate in very subtle ways, but we’re the

only species that actually communicates symbolically.

-Other animals communicate, (1, LID, 13)

-we’re the only species that communicates symbolically. (1, NA, 14)

Language is believed to be instinctual in our species, an instinct.

-Language is an instinct for humans. (1, ID, 15)

And what are some of the reasons neurobiologists and linguists believe this?

-There are reasons that neurobiologists and linguists believe this. (1, NA, 16)

Well, skeletal specializations have been identified in our earliest hominid ancestors that

allow for speech.

-Our ancestors had skeletons that allowed for speech. (1, ID, 17)

This suggests that language arose at the dawn of our evolution.

-Language arose a long time ago. (1, NA, 18)


Also, while an individual language of course is learned, the ability to recognize the

individual sounds in any language (which we call phonemes) is present in humans at

birth.

-(even though a) Language is learned, (1, LID, 19)

-we are able to recognize individual sounds in any language at birth. (1, ID,

20)

-individual sounds are called phonemes. (1, ID, 21)

So any baby at birth can hear all of the sounds that are made in any language spoken on

the planet Earth, but the reason you speak Japanese if you’re born in Japan is because

you’re exposed to a subset of sounds that make up the Japanese language.

-At birth, we can hear all of the sounds in any language. (1, NA, 22)

-If you’re exposed to Japanese sounds, you will speak Japanese. (1, NA, 23)

If you speak English, you’re subjected to those sounds.

-If you speak English, you’re subjected to those sounds. (1, LID, 24)

So an individual language is learned, but human infants have the ability to hear (to make

the distinction between) phonemic sounds in all languages. Lastly, the left hemisphere

shows specialization before birth in the language areas we’re going to talk about.

-The left hemisphere shows specialization before birth (1, NA, 25)

-(in certain areas). (1, NA, 26)

So we believe that all of these things indicate that language is instinctual in our species.

Now, language is composed of a number of different elements.

-Language is made up of many different things. (1, NA, 27)

One of the things I find fascinating is that there are approximately 6,000 distinct,

individual languages spoken on the planet Earth.

-There are 6,000 different languages on Earth. (1, NA, 28)

What I find even more amazing is that about 1,000 of these languages are spoken in New

Guinea.

-1,000 of these are spoken in New Guinea. (1, LID, 29)

These are independent, separate languages. These are all the languages spoken on Earth,

1,000 of them in New Guinea. Now, each language consists of different phonemic sounds

(or individual sounds).

-Each language consists of different phonemic sounds. (1, ID, 30)

And so English for example consists of about 50 distinct phonemes (or phonemic sounds).


-English consists of about 50 phonemes. (1, ID, 31)

For example, /b/ and /c/

-/b/ and /c/ are phonemes. (1, LID, 32)

These are not letters, these are sounds: -/b/ and /c/’

-/b/ and /c/ are sounds, not letters. (1, LID, 33)

So you think about words like bat, cat, and notice that those different consonant sounds

convey the difference between those two words.

-Different consonant sounds convey the difference between two words (1, ID,

34)

-like bat and cat (1, LID, 35)

And that’s very important that one sound conveys the difference between two animals.

Very, very different. Now, do you remember when we discussed that in the auditory

system that as we age we lose the ability to hear higher frequency sounds?

-As we age, we aren’t able to hear high frequency sounds. (1, NA, 36)

Well that’s unfortunate, because what we fail to be able to hear as we get older are

specifically the consonant sounds.

-(As we age) We lose the ability to hear consonant sounds. (1, ID, 37)

And so that’s why, as we get older, we have trouble understanding the words and songs

and music when we listen to the television set, because we’re not hearing the phonemic

sounds that begin a word that give us a clue as to what the word means.

-That’s why we have trouble understanding words, songs, and music when

we listen to the TV (1, NA, 38)

-We don’t hear the phonemic sounds that begin a word. (1, ID, 39)

So this is most unfortunate.

Now, morphemes are the simplest arrangement of phonemes into a meaningful group.

-Morphemes are the simplest arrangement of phonemes into a group (2, M,

40)

So for example, a syllable is a morpheme.

-A syllable is a morpheme. (2, ID, 41)

And simple words in a language are different and distinguished from each other by

phonemes and morphemes, and they convey different meanings.

-Words are different because of phonemes and morphemes (2, ID, 42)


-They convey different meanings. (2, LID, 43)

And it’s meaning that matters to us.

-Meaning matters to us (2, LID, 44)

That’s what language is about. It’s about conveying meaning.

-Language is about conveying meaning (2, ID, 45)

Now words, in turn, make up sentences, and sentences are nothing more than temporal

strings of words that have meaning.

-Words make up sentences. (2, NA, 46)

-Sentences are strings of words with meaning. (2, NA, 47)

But the meaning in this case isn’t just due to the individual words, but meaning is also

conveyed by grammar and syntax.

-But meaning isn’t because of individual words. (2, LID, 48)

-Meaning is conveyed by grammar and syntax. (2, ID, 49)

So, for example, all languages, each individual language, has a specific word order.

-All languages have a word order. (2, NA, 50)

So they have a place in the sentence where the subject, verb, and the object will go.

-The subject, verb, and object go in certain places in the sentence. (2, ID, 51)

In English, that’s how it is. It’s subject, verb, and object.

-In English, it’s subject, verb, and object. (2, NA, 52)

The order of words in an English sentence conveys meaning.

-The order of words conveys meaning. (2, ID, 53)

So for example, in English: “The boy looked at the girl,” and “The girl looked at the

boy,” mean two different things.

-in English: “The boy looked at the girl,” and “The girl looked at the boy,”

mean two different things (2, LID, 54)

And all of the words are the same. Just the order has been changed.

-The order has been changed. (2, LID, 55)

And each individual language has its own word order. Language areas are found in the

left cerebral hemisphere.

-Language areas are in the left hemisphere. (2, M, 56)


We’ve talked about this before, that the left hemisphere is dominant, and the left

hemisphere is generally dominant whether you’re right-handed or left-handed.

-The left hemisphere is dominant for language. (2, ID, 57)

-The left hemisphere is dominant even if you are left-handed. (2, ID, 58)

So we normally call it the dominant hemisphere, but that is almost always the left

hemisphere. These language areas play a critical role in our ability to speak and

understand language.

-These areas are important for us to speak and understand language. (2, ID,

59)

There were two physicians, Paul Broca and Karl Wernicke, who were among the very

first to describe patients that had specific disorders of language.

-Two physicians were the first to discover language disorders. (2, NA, 60)

-(Paul Broca and Karl Wernicke) (2, LID, 61)

An aphasia is an acquired disorder of language.

-An aphasia is an acquired disorder of language. (2, ID, 62)

It means that the individual could speak or understand language perfectly fine, and then

had a stroke or some other kind of damage, and now has some kind of problem related to

language.

-It means that a normal person had a stroke or some kind of damage but now

has a problem with language. (2, ID, 63)

And this distinguishes it from something like dyslexia and other types of disorders.

-This is different from dyslexia and other disorders. (2, LID, 64)

So aphasia is specifically an acquired disorder of language. And it needs to be contrasted

with something else because it’s easy to get these confused, but it’s a very important

distinction. Aphasia is a disorder of language. It is not about articulation.

-It (aphasia) is not about articulation (2, LID, 65)

So an individual might have difficulty articulating words because they had an injury to

their face because they have difficulty moving their tongue a particular way.

-Someone might not be able to say words because they had a face injury or

can’t move their tongue very well (2, LID, 66)

They might have some other kind of neurological problem that makes it difficult for them

to articulate words.


-Someone might have a neurological problem that makes it hard for them to

say words (2, LID, 67)

But that’s not what we’re talking about. Aphasia is a disorder of the higher order

function of language.

-But aphasia is a problem with the higher function of language (2, ID, 68)

It means specifically being able to understand your native language and to be able to

speak it normally.

-because you can understand and speak your native language. (2, ID, 69)

So it is a higher order function that is lost.

-So they lose this higher order function (2, LID, 70)

So let’s begin with one of the aphasias that was named after Paul Broca.

-One aphasia was named after Broca. (2, NA 71)

“Broca’s aphasia” is a motor aphasia, or called an expressive aphasia.

-Broca’s aphasia is a motor aphasia. (2, ID, 72)

-It is an expressive aphasia. (2, ID, 73)

And it is due to damage specifically within areas 44 and 45 in the frontal lobe in the

inferior frontal gyrus.

-It is caused by damage in areas 44 and 45 of the frontal lobe (2, ID, 74)

-It is in the inferior frontal gyrus. (2, ID, 75)

If we looked back at a drawing of the brain, this is the left hemisphere, Broca’s aphasia

results right here to the area which bears his name: Broca’s area, Brodman’s area 44

and 45.

-Broca’s aphasia occurs in Broca’s area in the left hemisphere (2, LID, 76)

-This is in Brodman’s area 44 and 45. (2, ID, 77)

An individual who has Broca’s aphasia is very hesitant about speaking.

-Someone with Broca’s aphasia is very hesitant to speak. (2, ID, 78)

It’s language they can’t speak.

-They can’t speak language. (2, NA, 79)

It’s not really a problem related to understanding, it’s not a problem in articulation.

-They can understand language. (2, LID, 80)


-They can articulate (language). (2, LID, 81)

They have trouble speaking language.

So they’re very hesitant, and certain parts of speech are missing.

-When they talk, parts of speech are missing. (2, ID, 82)

So if they want to go to the store or something they might say, “Go store.”

-If they want to go to the store they will say, “Go store.” (2, LID, 83)

And it’s very hard for them to communicate language.

-It’s very hard for them to communicate. (2, ID, 84)

Something seems to be wrong in the motor aspect of being able to speak language.

-Something is wrong with their motor ability to speak language. (2, NA, 85)

And over time, the individuals with these disorders (which involves Broca’s aphasia)

become mute. So the individual basically stops speaking.

-These people become mute. (2, ID, 86)

They are no longer able to communicate in spoken language.

Now, let’s contrast that with the other type of aphasia, which is named after the other

physician, called Wernicke.

-Another type of aphasia is called Wernicke. (3, M, 87)

Wernicke’s aphasia is a sensory or a receptive aphasia, and the disorder is due

specifically to lesions that are found in damaged area 22 of the temporal lobe.

-Wernicke’s aphasia is a sensory or receptive aphasia (3, M, 88)

-It is due to lesions in area 22 of the temporal lobe. (3, ID, 89)

So let’s look at where that is at.

Here is Wernicke’s area right here, Brodman’s area 22.

-Wernicke’s area is in Brodman’s area 22. (3, NA, 90)

Now one of the things I want you to notice: Broca’s aphasia is a motor aphasia. It’s an

expressive aphasia. Notice it’s in the frontal lobe, which is where all those motor areas

are located, right?

-Motor areas are in the frontal lobe. (3, NA, 91)

So you can remember it that way. What is down here, what primary area is down here in

the temporal lobe: the auditory area.


-The auditory area is in the temporal lobe. (3, ID, 92)

And Wernicke’s aphasia is specifically an aphasia, a receptive aphasia, in that an

individual now can’t understand language.

-A person with Wernicke’s aphasia can’t understand language. (3, ID, 93)

So this individual can speak perfectly fine, but they no longer understand what other

people say to them.

-They can speak just fine, (3, ID, 94)

It’s a very interesting kind of thing. Broca’s patients can understand what is said to them,

but they can’t speak language. The Wernicke’s patient talks non-stop, but nothing they

say makes any sense.

-Wernicke’s patients talk non-stop but don’t make any sense. (3, ID, 95)

It’s almost as though when the words come out of their mouth, they can’t understand

language, so what goes into their own ear doesn’t make any sense either.

-They don’t understand what they’re saying either. (3, LID, 96)

So for example, in a physician’s office, if an individual had a stroke involving this area

(Wernicke’s area), you might ask the individual something like, “Do you know why

you’ve been brought into the hospital today?”

-In a physician’s office, if an individual had a stroke in Wernicke’s area, (3,

LID, 97)

-You ask, “Do you know why you’ve been brought to the hospital today?” (3,

LID, 98)

And the individual might say, “The sky is blue and the dog had a pink collar on. And

furthermore, there’s a candy store down at the end of the…”

-The individual might say, “The sky is blue and the dog had a pink collar on.

And furthermore, there’s a candy store down at the end of the…” (3, LID,

99)

And they talk non-stop, but it doesn’t make any sense, and it has nothing to do with what

you said to them. You can imagine the difficulty in being a family member, and the

dynamics of the family, and how it changes with people who have these kinds of aphasias.

-It is difficult for the family for people with these aphasias. (3, LID, 100)

Now obviously, in normal individuals, we hear language spoken to us, and Wernicke’s

area is connected to Broca’s area, and that should make sense to you.

-Normal people hear spoken language. (3, LID, 101)

-Wernicke’s area is connected to Broca’s area. (3, ID, 102)


Obviously, when someone says, “Do you know why you’re brought into the hospital?” if

you make an appropriate response it means you understand what was said to you.

-If someone asks you a question and you respond normally, it means you

understood what they said. (3, LID, 103)

So the areas are connected, and there’s a different type of aphasia that occurs when that

connection between the two areas is lost.

-A different aphasia happens when the connection is lost. (3, ID, 104)

So there are many different kinds of aphasias.

-There are many different kinds of aphasias. (3, LID, 105)

One of the things I would like you, as my students, to notice here is this: do you

remember that most of the lateral aspect of the hemisphere was supplied by a single

artery, and that’s the middle cerebral artery?

-Most of the lateral part of the hemisphere is supplied by an artery. (3, ID,

106)

-This is the middle cerebral artery. (3, ID, 107)

So it turns out that individuals who have strokes that involve the major branches to this

lateral aspect of the cortex (the middle cerebral artery) can have both Broca’s and

Wernicke’s aphasia.

-Strokes in the middle cerebral artery cause Broca’s and Wernicke’s

aphasia. (3, ID, 108)

Which means they can no longer speak language, and they can no longer understand

language.

-They can’t speak or understand language. (3, ID, 109)

And this can be utterly devastating for the person and their family. Now we mentioned

previously that one of the shattered sort of paradigms in neuroscience was that language

was exclusively a left, or dominant, hemisphere function. So yes indeed, Broca’s area and

Wernicke’s area are indeed found in the left, or dominant, hemisphere.

-Broca’s area and Wernicke’s area are found in the left hemisphere (3, NA,

110)

But you know what we sort of wondered is, “What is Broca’s area in the right

hemisphere doing, or what is Wernicke’s area in the right hemisphere doing?” And what

we have discovered is that even though the left hemisphere is dominant for language, the

right hemisphere plays in fact a very critical role in language.

-The right hemisphere is also important for language (3, ID, 111)


What is the main function of language? The main function is communication.

-The main function of language is communication. (3, M, 112)

So what does the right hemisphere do? Well the right hemisphere is predominantly

involved in prosody.

-The right hemisphere is involved in prosody. (3, NA, 113)

Prosody is the intonation, and the “sing-song-y” nature of language.

-Prosody is intonation, (3, ID, 114)

-Prosody is the “sing-song-y” nature of language. (3, ID, 115)

And each language has a “sing-song-y” nature to it.

-Each language has different sing-song-y styles. (3, LID, 116)

When you hear someone speak French, you hear someone speak Italian, you hear

someone speak English, there’s different kinds of lilting intonation and rhythm to the

languages that are spoken by a normal-speaking person of that language.

-Such as French, Italian, and English. (3, ID, 117)

So prosody is very important. It’s also one of the ways we convey meaning.

-Prosody is one way we convey meaning. (3, ID, 118)

“Janette, SIT down!” “Janette, sit down.”

-“Janette, SIT down!” “Janette, sit down.” (3, LID, 119)

We convey meaning in a motive way when we use different kinds of rhythm and inflection

in our voice.

-Rhythm and inflection help us convey meaning in a motive way. (3, ID, 120)

So it is also one of the ways that we communicate.

-It is one of the ways that we communicate. (3, LID, 121)

Now, interestingly, people with lesions in the non-dominant hemisphere (in the area

corresponding to Broca’s area in the dominant hemisphere) speak in flat tones.

-People with lesions in the non-dominant hemisphere speak flatly. (4, ID, 122)

So the individual doesn’t have the “sing-song-y” and doesn’t inflect.

-They don’t have “sing-song-y” or inflection (4, LID, 123)

The person that has the comparable area that would be Wernicke’s area (but in the right

hemisphere) doesn’t understand the emotive communication that takes place when other

people speak to them.


-Damage to Wernicke’s area in the right hemisphere causes loss of

understanding of emotive communication. (4, ID, 124)

So when someone else speaks to them with that emphasis or some kind of inflection, or in

a particular way, the person doesn’t understand that emotive element.

-When someone talks to them with inflection, they don’t understand the

emotive element. (4, ID, 125)

So in fact our right hemisphere (our non-dominant hemisphere) plays a very critical role

in communication. And that’s what language is really about. Now another paradigm that

has been sort of shattered in modern neuroscience relates to people who sign.

-Another paradigm is about people who sign. (4, M, 126)

And this is kind of interesting. I once offered a course to teach medical students how to

sign to patients who were deaf.

-I taught a course that taught medical students how to sign. (4, LID, 127)

And people have a misconception, and neurobiologists had a misconception for a long

time. It’s been known for ages that the left hemisphere appears to be

dominant for language and for analytical ability.

-It’s been known for ages that (4, LID, 128)

-the left hemisphere is dominant for analytical ability. (4, ID, 129)

-There was a misconception. (4, NA, 130)

So people who are physicists tend to be very left-hemisphere dominant.

-Physicists are left-hemisphere dominant. (4, NA, 131)

Also, the right hemisphere was thought to be more involved in things like spatial

properties.

-People thought that the right hemisphere was involved in spatial properties (4, ID, 132)

And so the hemispheres were seen in that particular way. So the conclusion was reached

that individuals who use sign language, which means they use space in front of them and

move their hands, that signing had to be a right hemisphere function.

-People who use sign language use space to move their hands. (4, ID, 133)

-People thought that sign language was a right hemisphere function. (4, ID,

134)

Well it turns out that’s not so.

-Sign language is not a right hemisphere function. (4, LID, 135)


Language is language.

-Language is language. (4, NA, 136)

And the brain doesn’t care what medium you use to communicate or to use language.

-The brain doesn’t care how you communicate or use language (4, ID, 137)

-The left hemisphere is dominant even for people who use sign language. (4,

ID, 138)

It’s a left hemisphere dominant function. And people who have never spoken, and who

use sign language, use the same Broca’s and Wernicke’s areas as individuals who speak

language.

-People who sign use the same areas as people who speak. (4, ID, 139)

So what happens in these individuals? If a person who is a signer has Broca’s area

compromised, then that individual is halting in their signing, just like the person who has

Broca’s is halting.

-If someone who signs has Broca’s area compromised, that person will halt in

their signing (4, ID, 140)

-like the person who has Broca’s is halting. (4, LID, 141)

If they have a receptive aphasia on the other hand, Wernicke’s area is involved, then they

can no longer understand the signs that someone else is making to them.

-If they have receptive aphasia, then they can’t understand when other

people sign to them. (4, ID, 142)

So, in our species, the left hemisphere appears to be specifically designed to use

language to do what’s necessary to allow us to communicate.

-The left hemisphere is designed to use language to do what’s important for

us to communicate. (4, ID, 143)

Now up to this point we’ve been focusing primarily on spoken language, but we of course

do have another type of communication, and that’s written language. Now, unlike spoken

language, written language is an invention, not an instinct.

-Written language is an invention, not an instinct. (4, ID, 144)

It’s an invention. We have a number of different areas, however, that have been

implicated in written language, and this is very important.

-Different areas of the brain are included in written language. (4, ID, 145)

But before we get to those areas I want to point something out to you, because I think this

is very, very important. Written languages rely on pictures to represent words, or we

have alphabets (that’s what we use now).


-Written languages use pictures to represent words. (4, ID, 146)

-We use alphabets. (4, ID, 147)

But every neurologically normal person on the face of this planet learns how to speak

language.

-Every neurologically normal person learns how to speak language. (4, ID,

148)

But not every person on the face of this planet will learn how to read or write.

-Not everyone will learn how to read or write. (4, NA, 149)

And in fact if you looked at our planet as a whole, you would see there are far more

individuals who do not read and write.

-There are many more individuals who do not read and write. (4, NA, 150)

But every single neurologically normal kid will learn how to speak language. Language

is an instinct. Spoken language is an instinct. Or if their parents signed to them, whatever

language they use, that will be an instinct. Written language is an invention.

-Spoken and signed language is an instinct, (4, ID, 151)

But written language is an invention. But there are also some other differences, which I

think are very important.

-There are some other differences. (4, LID, 152)

What we’ve learned is that in spoken language, the left hemisphere is actually designed

to abstract the set of sounds that are being spoken in that particular language.

-In spoken language, the left hemisphere abstracts the set of sounds spoken

in a language. (4, ID, 153)

So in spoken language, when the baby’s brain can understand the phonemic sounds

found in any language, the language they’re exposed to, the brain abstracts that

particular set of phonemic sounds (the 50 phonemic sounds of English, for example).

-In spoken language, a baby’s brain can understand the phonemes of any

language (4, ID, 154)

-the brain abstracts that set of phonemes. (4, ID, 155)

There are 50 phonemic sounds of English. Now, individual sounds make up the language.

So this is a part of the process of the brain doing what it’s supposed to.

-Individual sounds make up language. (4, NA, 156)

The baby’s brain also abstracts the rules of the language, so the word order for example.


-The baby’s brain abstracts the rules of language (4, ID, 157)

-it abstracts the word order. (4, LID, 158)

So the baby’s brain is designed to do this.

-The baby’s brain is designed to do this. (4, LID, 159)

Wernicke’s and Broca’s areas are designed to do what they do. And it’s this abstracting

of the rules that makes this an instinct.

-It is an instinct to abstract rules. (4, NA, 160)

By simply hearing the sounds, or by simply seeing your parents sign over your crib, your

brain is abstracting what these sounds mean.

-By hearing the sounds or seeing your parents sign, your brain abstracts

what the sounds mean. (4, NA, 161)

And more importantly, for any course in neuroscience, the brain is capable of mapping

that sound to meaning.

-The brain maps that sound to meaning. (4, ID, 162)

And that’s what language is about. And that’s what the left hemisphere appears to be

specifically designed to do.

-That’s what the left hemisphere is designed to do. (4, NA, 163)

Now, let’s think about written language though. That isn’t what happens in written

language. You have to be taught to read and write.

-But in written language, you have to be taught to read and write. (5, NA,

164)

There’s no abstracting the general rules by your brain.

-Your brain doesn’t abstract the rules. (5, NA, 165)

You actually have to be taught to read and write. So what areas of the brain have we

discovered play a role in the ability to read and write? Well it turns out that there are two

areas that are found in the parietal lobe in the dominant hemisphere, right here. These

are areas 39 and 40 in the parietal lobe.

-Two areas play a role in the ability to read and write. (5, ID, 166)

-These are found in areas 39 and 40 parietal lobe (5, ID, 167)

-in the dominant hemisphere. (5, ID, 168)

Damage to these areas results in an acquired illiteracy.

-If these areas are damaged, you become illiterate. (5, ID, 169)


So it means that if a person who could formerly read and write has a stroke that involves

that area, suddenly they can no longer read or write.

-If someone who could read and write has a stroke in this area, they will no

longer be able to read or write. (5, NA, 170)

Notice again that they’re also supplied by the middle cerebral artery.

-These areas are supplied by the middle cerebral artery. (5, ID, 171)

So massive middle cerebral artery strokes devastate the person’s ability to have

language.

-Cerebral artery strokes devastate language. (5, ID, 172)

Now, finally: language in humans is not just about communication.

-Language is not just about communication. (5, NA, 173)

Language actually helps us organize sensory experience.

-Language helps us organize sensory experience. (5, ID, 174)

And this is something that is of great interest to neurolinguists who are very interested in

these issues.

-This is interesting to neurolinguists. (3, LID, 175)

Most obviously, we categorize objects in our world by words.

-We categorize objects by words (5, NA, 176)

And once that meaning’s mapped to that word, you can’t ever look at something and not

see a table, or not see a chair, or not see a woman, or a cat.

-Once meaning is mapped to a word, you can never see it any other way. (5,

ID, 177)

-You can never look at it and not see a table, or not see a chair, or not see a

woman, or a cat. (5, LID, 178)

You can no longer do this. The word has been mapped to meaning in your brain, and

short of neurological disease, you can’t lose the ability.

-you can’t lose the ability unless you get a neurological disease. (5, ID, 179)

So this is part of what the brain is designed to do.

-This is part of what the brain is designed to do. (5, LID, 180)

So for example, when a child is very little, almost any small four-legged beast is a kitty or

a doggie.


-When a child is little, any four-legged animal is a kitty or a doggie (5, A,

181)

And then as the child acquires language and the brain starts to map meaning to the

words, now all of a sudden a doggie’s a doggie, a kitty’s a kitty.

-As the child learns language, the brain starts to map meaning to words. (5,

NA, 182)

-They start calling a doggie a doggie, and a kitty a kitty. (5, LID, 183)

Just try describing what the real difference is; that’s what your brain’s picking up.

-The real difference is what your brain is picking up. (5, NA, 184)

Suddenly the child can pick up the difference between the two types of animals, and never

again will they confuse a dog for a cat.

-The child can pick up the difference between two types of animals (5, NA,

185)

-and never again will they confuse a dog for a cat. (5, LID, 186)

Never again, of course, unless you have a brain lesion.

-Unless you have a brain lesion. (5, NA, 187)

There are actually people who have specific brain lesions who lose the ability to

differentiate between a dog and a cat. There are people who cannot tell the difference

between two different kinds of vegetables. They can’t differentiate between different types

of flowers.

-There are people who have brain lesions who lose the ability to differentiate

between a dog and a cat. (5, NA, 188)

-or two different kinds of vegetables (5, LID, 189)

-or flowers (5, LID, 190)

They can’t put the appropriate word with the object.

-They can’t put the appropriate word with the object. (5, NA, 191)

In addition, and this is sort of beyond the scope of this course, but people can lose very

specific parts of speech.

-People can lose specific parts of speech. (5, NA, 192)

So, for example, when I was a student, I saw an individual presented to a class I was in

who had lost the ability to speak, to read, or to write, nouns. Only nouns.


-When I was a student, we saw a person who had lost the ability to speak,

read, or write only nouns. (5, NA, 193)

So instead of saying, “The sky is blue,” he would say, “The’s blue.”

-So instead of saying, “The sky is blue,” he would say, “The’s blue.” (5, LID,

194)

No break, nothing.

-There were no breaks (5, LID, 195)

Nouns were just gone!

-The nouns were gone. (5, LID, 196)

And there are other parts of speech people can lose. There is nothing (there is no ability

you have) that can’t be lost with the right brain lesion.

-The right brain lesion can cause the loss of any ability. (5, ID, 197)

And that’s the point basically of the whole course. But it’s very important to point that

out. You can actually lose parts of speech. Notice also that thought has a lot to do with

words.

-Thought also has a lot to do with words. (5, NA, 198)

So if we’re silent and we start thinking about something, notice in fact that it’s words that

are coming to mind.

-When we think about something, we think of words. (5, NA, 199)

Think, “I want to go to the store,” and suddenly, or, “I want to move over to the brain

model.”

-Think, “I want to go to the store,” (5, LID, 200)

-or, “I want to move over to the brain model.” (5, LID, 201)

And suddenly I move over and I touch my brain model.

-I move over and touch my brain model. (5, LID, 202)

So internal thought has a lot to do with language.

-So internal thought has a lot to do with language. (5, ID, 203)

Now, the role of the brain obviously in written and spoken language is considerably more

complex than what we have time to cover here. One of the things we’re learning is that

there are habitual ways that we learn how to speak.

-There are habitual ways that we learn how to speak. (5, ID, 204)


So there are people who say, “Uh huh, uh huh,” and there are people who have different

kinds of patterns or habits.

-There are people who say, “Uh huh, uh huh.” (5, LID, 205)

-People have different kinds of patterns or habits. (5, NA, 206)

Well it turns out it looks like the extrapyramidal motor system takes over there, and so

without even thinking, these language areas don’t even need to be called into play

anymore because the individual just responds habitually a certain way to something.

-The extrapyramidal motor system takes over. (5, ID, 207)

-Without even thinking, these language areas are no longer used (5, NA, 208)

-because people respond habitually to something. (5, LID, 209)

And remember those motor programs in the extrapyramidal motor system?

-There are motor programs in the extrapyramidal motor system. (5, LID,

210)

So people who have lesions in the extrapyramidal motor system often lose habitual ways

of speaking and interacting with other people by language, which is very, very

interesting.

-People who have lesions in the extrapyramidal motor system lose habitual

ways of speaking and interacting with other people by language. (5, ID, 211)

Now, one of the last areas (which is of great interest to neurobiology, and also of interest

to people who are in neurolinguistics) will be people who are bilingual, or people who

speak more than one language.

-Another interesting area is people who are bilingual (6, M, 212)

-or people who speak multiple languages. (6, ID, 213)

-This is of interest to neurobiology, or neurolinguists (6, LID, 214)

So how the brain acquires the first, second, third languages (whatever), how you acquire

these languages seems to be dependent on age.

-The way the brain acquires languages is dependent on age. (6, ID, 215)

So in normal individuals, we acquire our first language when we’re babies, obviously.

-Normal people acquire their first language as babies. (6, ID, 216)

This is when our brain is designed to do this, and it’s working overtime to do so.

-This is when our brain is designed to do this. (6, NA, 217)

-The brain works hard to do this. (6, LID, 218)


Now, if very early in development you’re exposed to other languages, and I actually have

a marvelous example for you.

-There is an example of when you are exposed to other languages in early

development. (6, NA, 219)

There was a secretary in one of the departments in Vanderbilt who was Danish.

-There was a lady who was Danish. (6, NA, 220)

She had a little girl, and she never spoke anything to the child but Danish.

-She had a daughter and only talked to her in Danish. (6, NA, 221)

Her husband was French, and he never spoke any words to the child except French.

-Her husband was French and only talked to her in French. (6, NA, 222)

And everyone else in the world she lived in spoke English to her.

-Everyone else spoke English to her. (6, NA, 223)

Well, initially when she was very little, you know, she’s two years old, she’s starting to

babble. Okay?

-When the daughter was two years old, she started to babble. (6, LID, 224)

She’s getting all the languages mixed up and she’s got a word for this, but she can’t think

of another word for that, but she talks non-stop, you know, because she’s two years old.

-She mixed the languages up and had different words for everything. (6, LID,

225)

-She talked non-stop because she was two years old. (6, LID, 226)

Well then a miracle happened.

-Then a miracle happened. (6, LID, 227)

Somewhere about the age of four, suddenly when she was speaking to her mother, she

spoke only Danish.

-At four years old, she started speaking to her mother in only Danish. (6, ID,

228)

She would speak to her father only French.

-She spoke to her father in French. (6, ID, 229)

When speaking to other people, the babysitter, only English.

-She spoke to other people in English. (6, ID, 230)

Suddenly her brain had separated these languages.


-Her brain had separated the languages. (6, NA, 231)

Broca’s and Wernicke’s areas are capable of hearing the sounds in any language, capable

of abstracting the rules of any language.

-Broca’s and Wernicke’s area can hear the sounds in any language (6, ID,

232)

And when you’re young, we really don’t know how many languages a person could

potentially learn.

-When you’re young, we don’t know how many languages someone could

learn. (6, NA, 233)

But what happens is that as we age, something occurs in the brain.

-But when we age, something happens in the brain. (6, NA, 234)

This occurs after puberty, and we’ll talk a little bit about some changes that take place in

puberty.

-This occurs after puberty. (6, NA, 235)

-Some changes take place in puberty. (6, LID, 236)

But what happens is that as you age, you lose this ability to have these areas abstract the

rules.

-As you age, you lose the ability to abstract the rules. (6, ID, 237)

Now you have to study language.

-Now you have to study language. (6, NA, 238)

Now you have to bring your hippocampus and your memory into play.

-Now you have to bring your hippocampus and your memory into play. (6,

NA, 239)

Now you have to read the word.

-Now you have to read the word. (6, NA, 240)

And notice what you do when you learn a second language as an adult: you look at the

word and they tell you that “hello” is “hola” in Spanish. And what do you think? “Oh,

‘hola’ means ‘hello.’” No, “hola” means, “hola.”

-When you learn a second language as an adult (6, NA, 241)

-You think that ‘hola’ means ‘hello.’ (6, NA, 242)

-But ‘hola’ means ‘hola.’ (6, LID, 243)


To a child, they don’t translate it into some other language to understand what it means.

-Children don’t translate words into another language to understand them. (6, ID, 244)

It just becomes mapped to meaning in their brain.

-It just becomes mapped to meaning in their brain. (6, ID, 245)

So, how we learn second languages is different, and also something happens around

puberty. An interesting sideline that has happened, which is just really fascinating, is that

there was an evolution of a new language.

-There was an evolution of a new language. (6, NA, 246)

And by the way, little children who are signers, when they’re about 2, they use their

hands, and they talk, and they just babble, just like little kids who speak language.

-2-year-old children who sign use their hands (6, NA, 247)

-They babble just like little kids who speak language (6, LID, 248)

I mean, it’s just fascinating. But before 1979, in Nicaragua, there were children who

were deaf who didn’t understand spoken language (they were deaf), and these children I

believe were orphans.

-Before 1979 in Nicaragua (6, NA, 249)

-There were orphaned, deaf children who didn’t understand spoken

language (6, NA, 250)

And they were all brought together from different areas of Nicaragua, and there were

about 500 of them.

-There were 500 of them (6, NA, 251)

-they were brought together from different areas. (6, LID, 252)

And these were young children. These children initially could not communicate with each

other.

-At first, they could not communicate with each other. (6, ID, 253)

And what happened over time is they developed a full blown, brand new sign language

that had never been seen on the planet Earth.

-They developed their own sign language. (6, ID, 254)

And it had syntax. It had order. It had meaning.

-It had syntax (6, NA, 255)

-It had order (6, NA, 256)


-It had meaning (6, NA, 257)

So language (this incredibly unique capability that we alone, as humans, have) helps us

communicate with others, and organize our sensory experience.

-Language helps us communicate with others (6, M, 258)

-It is a unique ability that only humans have. (6, LID, 259)

We can communicate to other people our feelings. We can try, anyway.

-We can communicate our feelings to others. (6, NA, 260)

This is just an incredible capability that we have, and short of brain damage, you will

always be able to communicate in this fashion with other people.

-Short of brain damage, you will always be able to communicate to others. (6,

NA, 261)

So it’s just an incredibly wonderful capability we have.


APPENDIX C

CUED RECALL TOPIC SELECTION PROCESS

Below is a table demonstrating the semantic overlap for various topic sentences

featured throughout segments. As part of the selection for cued-recall topics, several

different sources were consulted in order to choose topics that represented maximally

separate concepts within each of the six lecture segments. First, the normed ratings from

a previous experiment were consulted to determine which topics had been rated as main

ideas, important details, and less-important details. However, some main ideas often

served as topic sentences that were vague and over-arching, which would not lend

insightful responses for the purposes of cued recall. We then counted the most commonly

recalled idea units from previous experiments utilizing the continuous/restudy condition.

In combination with latent semantic analysis (LSA), topics were selected if they

manifested in a majority of the free recalls from the past experiments, and seemed to rate

higher than other topics per given segment in LSA. Therefore, the 12 topics were selected

not solely based on either output, but as a best representation of both that could allow for

maximal elaboration. For example, in Segment 2, we did not select the highly-scoring

topic “Aphasia is not about articulation.” Rather, we selected a related idea unit,

“Broca’s aphasia,” which not only appeared frequently in previous free recalls but also

could lead to predictable, relational elaboration.

Further, in order to measure cued recall more accurately, portions of the idea units were

removed so that participants had to elaborate on the topic as part of the prompt. For

example, “Individual sounds are called phonemes” was shortened to simply “phonemes.”

Doing so allowed for a higher possible cued recall score, since participants would be

invited to first define the item.

One goal in using LSA was to ensure that each topic used was as semantically separate

as possible. Given the highly interrelated nature of lecture ideas, some degree of

semantic overlap is to be expected. However, LSA scores helped in identifying two

semantically and temporally distant topics within each segment.
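The overlap screening described above can be illustrated with a simplified sketch. Note that this is not the LSA tooling used in the study: true LSA projects sentences into an SVD-reduced semantic space, whereas this toy version computes plain cosine similarity over bag-of-words vectors. The two topic sentences are drawn from the table below; the code is illustrative only.

```python
# Simplified stand-in for the LSA overlap check: cosine similarity over
# raw bag-of-words vectors. (Real LSA would first reduce the term-document
# matrix with SVD; this sketch only illustrates the screening idea.)
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words vector for a sentence."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors (0 to 1)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two candidate topics from the same segment (Segment 3):
topic_a = "Wernicke's aphasia is a sensory or receptive aphasia"
topic_b = "The right hemisphere is involved in prosody"

# A low score suggests the two topics are semantically separate,
# which is the within-segment selection criterion described above.
overlap = cosine(bow(topic_a), bow(topic_b))
print(round(overlap, 2))
```

Pairs with high cosine scores would be flagged as overlapping, and only semantically (and temporally) distant pairs retained, mirroring the selection rationale described above.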

Idea units in bold were targeted for use in cued recall, and where applicable, idea units in

italics were either the condensed version used for prompts, or different topics used in

place of the high-scoring idea unit.

Concept: Segment 1 | LSA: concept to Segment 1


Individual sounds are called Phonemes .75

Phonemes

Language involves higher order sensory and motor areas .69

Language is an instinct for humans. .70

Language is learned .60

We are the only species that communicates symbolically. .75

Concept: Segment 2 | LSA: concept to Segment 2

A syllable is a morpheme. .78

Language is about conveying meaning. .68

Meaning is conveyed by grammar and syntax .74

Language areas are in the left hemisphere .68

The left hemisphere is dominant even if you are left-handed. .86

The left cerebral hemisphere

Paul Broca and Karl Wernicke were the first to discover language disorders .73

An aphasia is an acquired disorder of language .71

Aphasia is not about articulation. .93

Broca’s Aphasia

Concept: Segment 3 | LSA: concept to Segment 3

Another type of aphasia is called Wernicke .74

Wernicke’s aphasia is a sensory or receptive aphasia .83

Wernicke’s Aphasia

The main function of language is communication .57

The right hemisphere is involved in prosody .70

Prosody is intonation .84

The right cerebral hemisphere

Concept: Segment 4 | LSA: concept to Segment 4

People with lesions in the non-dominant hemisphere speak flatly .74


Damage to Wernicke’s area in the right hemisphere causes loss of understanding of emotive communication .72

Another paradigm is about people who sign .89

Sign language

People thought that sign language was a right hemisphere function .82

Written language is an invention, not an instinct .72

Not everyone will learn how to read or write .77

Written language

Spoken and signed language is an instinct .61

Individual sounds make up language .69

It is an instinct to abstract rules .73

The brain maps that sound to meaning .47

Concept: Segment 5 | LSA: concept to Segment 5

But in written language, you have to be taught to read and write .75

Two areas play a role in the ability to read and write .71

These are found in areas 39 and 40 parietal lobe .70

These areas are supplied by the middle cerebral artery .69

Language helps us organize sensory experience .65

The child can pick up the difference between two types of animals .75

When a child is little, any four-legged animal is a kitty or a doggie .80

Mapped to meaning

People can lose specific parts of speech .78

There are people who have brain lesions who lose the ability to differentiate between a dog and a cat .77

We saw a person who had lost the ability to speak, read, or write only nouns .85

Thought also has a lot to do with words .89

There are habitual ways that we learn how to speak .85


Habitual patterns of language

Concept: Segment 6 | LSA: concept to Segment 6

Another interesting area is people who are bilingual .81

Bilinguals

There was a lady who was Danish .84

Normal people acquire their first language as babies. .75

As you age, you lose the ability to abstract the rules .81

Now you have to bring your hippocampus and your memory into play .77

But ‘hola’ means ‘hola.’ .69

You think that ‘hola’ means ‘hello.’ .82

Children don’t translate words into another language to understand them .81

There was an evolution of a new language .74

Evolution of a new language

There were orphaned, deaf children who didn’t understand spoken language .81

Language helps us communicate with others .67


APPENDIX D

EXPERIMENTAL INSTRUCTIONS

Listed here are the exact instructions and questions participants will receive. Anything *italicized between asterisks* serves as a note to the reader to establish procedural clarity, and is not part of the experiment itself. Also note that due to formatting conversion errors, multiple-choice options appear vertically in this document, but will be horizontal in the actual experiment.

Page 1

Overall Instructions for Self-Test and Restudy Groups

“Thank you for participating in our study. The first part of the experiment should

take no longer than 60 minutes. When you return tomorrow, the second part of the

experiment should take no longer than 60 minutes. Today, you will be asked to listen to a

30-minute lecture over the topic of language development. Please take notes as you

normally would for a lecture using the paper and black pen provided.

It is important that you take notes since when you return tomorrow, you will be

asked to take comprehensive tests over the material.

Throughout the lecture, the computer may randomly select you to either try to

recall as much as possible from what you had just learned, restudy your notes without

editing them, or clarify and elaborate upon your notes. The computer may select you to

do one of these tasks one or more times, and at any point throughout the lecture.

Please let the experimenter know if you have any questions about this part.

Otherwise, proceed to the next page.”

Overall Instructions for Note revision Groups

“Thank you for participating in our study. The first part of the experiment should

take no longer than 60 minutes. When you return tomorrow, the second part of the

experiment should take no longer than 60 minutes. Today, you will be asked to listen to a

30-minute lecture over the topic of language development. Please take notes as you

normally would for a lecture using the paper and black pen provided.

It is important that you take notes since when you return tomorrow, you will be

asked to take comprehensive tests over the material. In addition, the notes you take will

be given to another participant to study for their test. The other participant will not have a

chance to watch the video lecture, so they will rely only on your notes for their test.

Throughout the lecture, the computer may randomly select you to either try to

recall as much as possible from what you had just learned, restudy your notes without


editing them, or clarify and elaborate upon your notes. The computer may select you to

do one of these tasks one or more times, and at any point throughout the lecture.

Please let the experimenter know if you have any questions about this part.

Otherwise, proceed to the next page.”

Page 2, All

The 30-minute video that you will view today is about how language develops in the

brain. How much do you feel you already know about how language develops in the

brain?

None at all

A little

A moderate amount

A lot

A great deal

Page 3, All

At this time, please put on the headphones provided, and locate the paper and pen for

notetaking. The video will begin to play as soon as you proceed to the next page. Please

note that you will not be able to stop or rewind the video.

When you are ready to begin, please proceed to the next page.

*Video is displayed*

*For interpolated groups only: after the 5-minute segment plays, participants will be

redirected to the interpolated activity portion.*

Interpolated Note revision Group:

Please place the black pen next to the computer monitor, and pick up the red pen.

You have been selected to elaborate upon your notes. You will now have 2

minutes to clarify, revise, and elaborate upon your notes using the red pen. You should

make any changes, add any information that might have been missed, and make any

elaborations that could help the other participant learn the information best since they

will not be able to watch the video before the test.


You must use the entire 2 minutes since the computer will not continue until the 2

minutes have passed. A timer will be displayed for you to see the time remaining.

When you are ready to begin this part, please proceed to the next page.

Interpolated Self-Test Group:

Please place your notes in the folder next to the computer monitor labeled

“Notes.” Place your pen on top of the folder.

You have been selected to test yourself. You will now have 2 minutes to recall as

much information as possible from what you just learned in the lecture. You must use the

entire 2 minutes since the computer will not continue until the 2 minutes have passed. A

timer will be displayed for you to see the time remaining. Please use complete sentences.

When you are ready to begin this part, please proceed to the next page.

Interpolated Restudy Group:

Please place your pen next to the computer monitor. You will not be allowed to

use it for this part.

You have been selected to study your notes. You will now have 2 minutes to

study your notes. You must use the entire 2 minutes since the computer will not continue

until the 2 minutes have passed. A timer will be displayed for you to see the time

remaining. Please do not do anything on the computer or with the pen.

When you are ready to begin this part, please proceed to the next page.

*When participants proceed to the next page, a 2-minute timer counting backwards is

displayed as they carry out their designated task*

All interpolated groups:

You are now being redirected back to the lecture and may continue taking notes.

*10-second countdown displayed*

*Next 5-minute segment begins, process is repeated for all 6 segments of the lecture.*

*Participants in the continuous conditions watch the entire video and are then given the

SAME prompts, only with 12 minutes for their activity rather than 2. There is no

redirection prompt after the 12 minutes ends, as all participants then move on to the next

section.*


Post-lecture questions, all:

You have now finished the lecture portion of the experiment.

Please place your notes in the purple folder labeled "Notes" and put the folder flat

on the desk next to your monitor so that they are out of your way. Place the pen on top of

the folder. You are done with the notes.

You will now be asked to answer several questions.

Proceed to the next page to begin.

Move the slider to answer each question below.

1. What percent of the information in

this video lecture do you think you

could recall after one day?

2. What percent of definitions from this

video lecture do you think you would

get correct after one day?

3. What percent of the information in

this video lecture do you think you

connected together?

4. Roughly, what percent of the lecture

material did you understand?

*Questions 2, 3, and 4 were not analyzed due to potential issues with construct validity. Specifically, it was unclear whether participants’ perceptions of “percent” referred to the lecture information objectively or to a different referent, such as the information they remembered at the time of rating. In addition, all four questions showed significant multicollinearity.*

Final page, All:

This concludes the day 1 portion of the experiment. Please proceed to the next page to

complete the activity before quietly leaving.

Please arrive tomorrow for your scheduled day 2 portion of the experiment. Thank you!


Day 2 Instructions

Page 1, All:

Thank you for returning to participate in the second part of the experiment. When you are

ready to begin, please proceed to the next page.

Page 2, All:

Use the slider to answer each question below.

What percent of the information in

yesterday’s video lecture do you think

you could recall today?

Page 3, All (Free Recall):

Think back to what you learned during the lecture yesterday. In the text box below,

please recall as much of the information as you can. There is no time limit. Please use

complete sentences. When you are done, please proceed to the next page.

________________________________________________________________

________________________________________________________________

________________________________________________________________

________________________________________________________________

________________________________________________________________

Page 4, All (Cued Recall & Integration Instructions):

You will now be presented with some topics from yesterday's lecture. Each topic

will be presented one at a time and you will be asked to first elaborate upon that topic,

and then explain how that topic relates to other topics and ideas in the lecture.

Please use complete sentences.

Proceed when you are ready to begin.

Pages 5-17, All (Cued Recall & Integration Questions, in randomized order)

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Phonemes.


1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Language is an instinct.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Left hemisphere

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Broca's Aphasia.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Wernicke's Aphasia.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: The right cerebral hemisphere.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Sign language.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Written language.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.


Topic: The brain starts to map meaning to words.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Habitual language patterns.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Bilinguals.

1. Using what you learned from the lecture, elaborate upon the topic presented.

2. Elaborate upon how this topic relates to other topics and ideas in the lecture.

Topic: Evolution of a new language.

All: Demographic Questionnaire

Your age:

Your gender:

Male

Female

Prefer not to specify

Approximate credit hours completed (after this semester)(a guess is fine):

Classification:

Freshman

Sophomore

Junior

Senior

Other (specify) ______________

Major:

Predicted grade you will make in General Psychology

A+

A


A-

B+

B

B-

C+

C

C-

D+

D

D-

F

How many courses have you taken that used mostly video lectures?


APPENDIX E

LECTURE NOTES CODING RUBRIC

Note Quantity: Words

DO Count:

Dashes that serve a gestural purpose to indicate that a concept is connected to another concept (e.g., "language - instinct" would count as 3 words)

Any characters * ~ @ # $ % ^ & + = - \ / > <

Arrows

Contractions (there’s = one word)

Abbreviations (P. Lobe stands for “Parietal Lobe” = 2 words)

Shorthand (abo., btwn, w/i, etc.)

Everything in parentheses

w/i, w/, w/o, btwn, (etc)

etc., i.e., ex., e.g.

yrs

Relevant info from outside of lec (extra examples, etc.)

Do NOT Count:

Dashes that are meant to organize and are not connecting information (-left hemisphere)

Parentheses

Punctuation

Lead-ins, such as "The video mentioned that..."

Info about the lecturer ("Jeanette Norden, Ph.D")

The date

Labeling the paper with the term "Notes"

Irrelevant info from outside of lecture ("Lecture is cool!" etc)

Count as ONE Word:


non-dominant

non dominant

nondominant

nonstop

non stop

non-stop

sing-song

sing-song-y

sing song

bilingual

misconception

left-hemisphere (IU 131)

four-legged

Extrapyramidal

Extra pyramidal

neurobiologists

neurolinguists

2-year-old

baby sitter

#40

#22

#39

w/i

Count as SEPARATE Words:


left hemisphere

right hemisphere


cerebral artery

hemisphere

higher order

New Guinea

'buh' 'cuh'

/b/ /c/

inferior frontal gyrus

Brodmann's 22 (etc.)

sign language

motor system

uh huh

2 years

Syntax/order (or anything like this = 3 words)
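
The counting conventions above can be sketched in code. This is a rough illustration only, not the coders' actual tool: it handles a subset of the rules (whitespace tokens count as one word, listed two-token compounds collapse to one word, and tokens containing none of the rubric's counted characters drop out), and the compound list shown is just a sample.

```python
import re

# Sample of the two-token spellings the rubric counts as ONE word
ONE_WORD_COMPOUNDS = {"non dominant", "non stop", "sing song"}

def count_words(text):
    # Collapse listed compounds into single tokens before splitting
    for compound in ONE_WORD_COMPOUNDS:
        text = text.replace(compound, compound.replace(" ", "-"))
    tokens = text.split()
    # Keep any token containing a letter, digit, or a counted symbol
    # from the rubric's list (* ~ @ # $ % ^ & + = - \ / > <)
    counted = re.compile(r"[A-Za-z0-9*~@#$%^&+=\\/<>-]")
    return sum(1 for t in tokens if counted.search(t))

count_words("language - instinct")  # gestural dash counts: 3 words
count_words("-left hemisphere")     # organizing dash is part of the token: 2 words
count_words("non stop talking")     # listed compound: 2 words
```

The real coding was done by hand; a script like this would at most serve as a first pass to be checked against the rubric's edge cases.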

Coding Notes Rubric

- The process for coding notes into idea units is similar to that for recalls, except that, by their nature, notes are sparser.

- So, we will be counting number of words to represent length, and INSTANCES of idea

units that match up to the Master Code.

- On the digital copy of the notes, insert a comment for each identifiable idea unit and

label it with:

-The corresponding IU from the master code

-Whether it is an MI, ID, LID, or NA

-Segment

- If you have multiple IUs embedded into one another, you may highlight/differentiate

between idea units with different colors & comments

- Rather than labeling outside information as an "inference" here, we count it as "words

from outside of the lecture."

-Examples not from the lecture

-Tips for studying


-Explanations, elaborations, etc.

Here, you would insert a note for the info and label it as "outside info" and

add the number of words to the "#Words Outside Info" column in the

coding table

Example:

“Lang = instinct”

We would identify this transcription as Idea Unit 15, “Language is an Instinct,” which is an Important Detail from Segment 1.

Additional Characters to Count:

-Number of underlines & boxes around words

Some participants may use more organizational cues than others, and underlining,

circling, or boxing certain words is one of these cues.

Count these by the number of separate instances (e.g., "language is an instinct and a higher order process" = 2)

NOT included in word count

-Number of arrows or "gestural" symbols

Commonly observed as -->, dashes, =, or > < signs indicating that you must look

somewhere in reference

"Language --> instinct" would count as one arrow. "Language - instinct" also

would count.

"-Language = instinct" has only one gestural symbol. The dash here is just a

formatting tool.

Should be included in overall word count as well

-Number of visual diagrams & drawings

Some people will draw a picture of the brain and label it. You would count the

entire drawing instance as 1.

The words associated with the drawing (e.g., "frontal lobe") would be part of the overall word count.

Does NOT include arrows & the gestural symbols described above.


Does NOT include drawings that are irrelevant to the lecture (doodles)

Note Revisions Rubric

For interpolated note revision and continuous note revision conditions only.

Participants in the note revision conditions are instructed to make edits in RED.

Here we are only analyzing the information in RED that is added in on top of the regular

notes.

The rubric is the same as coding regular notes except for a couple of other variables:

-Number of Idea Units Added from Lecture

Comment and label which IUs., whether they are an MI, ID, LID, or NA, &

segment the examples correspond to

-Number of Examples Added from Lecture

Circle and write in which IUs & segment the examples correspond to

-Number of Examples Added from Outside of Lecture

Elaborations that don't match up to lecture, examples that don't match up, etc.

Label which segment this occurs in

-Number of words from lecture added

Rubric from regular note coding applies

-Number of words from outside of lecture added

Rubric from regular note coding applies

-Number of "proof-reading edits" added

Instances in which misspellings are corrected, info is crossed out & re-written,

formatting is amended, etc.


Note: I excluded number of words across any of the note/revision variables and main

ideas/important details/less important details/N/A from the analysis due to extremely high

variance. I also compiled gestural symbols, diagrams, boxes, and underlines into one

variable (“visual”) due to low scores across each of the columns separately.


APPENDIX F

DEMOGRAPHIC ANALYSES

Participants’ Demographic Information

Demographic Lecture Type Activity Mean Std. Deviation

Prior Knowledge Interpolated Note Revision 1.73 0.69

Restudy 2.27 0.94

Self-Test 2.13 0.57

Total 2.04 0.78

Continuous Note Revision 1.90 0.96

Restudy 2.10 0.61

Self-Test 1.83 0.59

Total 1.94 0.74

Total Note Revision 1.82 0.83

Restudy 2.18 0.79

Self-Test 1.98 0.60

Total 1.99 0.76

Activity: F(2,174) = 1.63, p = .18

Lecture Type: F(1,174) = .81, p = .37

Activity x Lecture Type: F(2,174) = 1.56, p = .21

Age Interpolated Note Revision 18.93 1.57

Restudy 19.17 2.12

Self-Test 19.13 3.65

Total 19.08 2.57

Continuous Note Revision 18.27 1.84


Restudy 18.30 0.84

Self-Test 18.57 1.04

Total 18.38 1.30

Total Note Revision 18.60 1.73

Restudy 18.73 1.66

Self-Test 18.85 2.67

Total 18.73 2.06

Activity: F(2,174) = .22, p = .80

Lecture Type: F(1,174) = .52, p = .14

Activity x Lecture Type: F(2,174) = .08, p = .92

Credit Hours Interpolated Note Revision 29.98 26.22

Restudy 26.05 15.81

Self-Test 28.90 24.95

Total 28.31 22.92

Continuous Note Revision 26.32 20.48

Restudy 24.28 14.73

Self-Test 24.22 18.12

Total 24.94 17.85

Total Note Revision 28.15 23.64

Restudy 25.16 15.21

Self-Test 26.56 21.75

Total 27.13 20.64

Activity: F(2,174) = 2.03, p = .13

Lecture Type: F(1,174) = 2.75, p = .10

Activity x Lecture Type: F(2,174) = .22, p = .80


Formal

Video Lecture

Experience

Interpolated Note Revision 0.77 1.04

Restudy 0.70 1.06

Self-Test 0.87 1.20

Total 0.78 1.09

Continuous Note Revision 0.63 1.00

Restudy 0.47 0.63

Self-Test 0.50 0.68

Total 0.53 0.78

Total Note Revision 0.70 1.01

Restudy 0.58 0.87

Self-Test 0.68 0.98

Total 0.66 0.95

Activity: F(2,174) = .26, p = .77

Lecture Type: F(1,174) = .52, p = .09

Activity x Lecture Type: F(2,174) = 2.94, p = .79


APPENDIX G

FREE RECALL CODING SCHEMA

Counting Words in Free Recall (also applies to cued recall)

DO Count:

Dashes that serve a gestural purpose to indicate that a concept is connected to another concept (e.g., "language - instinct" would count as 3 words)

Any characters * ~ @ # $ % ^ & + = - \ / > <

Arrows

Contractions (there’s = one word)

Abbreviations (P. Lobe stands for “Parietal Lobe” = 2 words)

Shorthand (abo., btwn, w/i, etc.)

Everything in parentheses

w/i, w/, w/o, btwn, (etc)

etc., i.e., ex., e.g.

yrs

Relevant info from outside of lec (extra examples, etc.)

Do NOT Count:

Dashes that are meant to organize and are not connecting information (-left hemisphere)

Parentheses

Punctuation

Lead-ins, such as "The video mentioned that..."

Info about the lecturer ("Jeanette Norden, Ph.D")

The date

Labeling the paper with the term "Notes"

Irrelevant info from outside of lecture ("Lecture is cool!" etc)

Count as ONE Word:

non-dominant

non dominant


nondominant

nonstop

non stop

non-stop

sing-song

sing-song-y

sing song

bilingual

misconception

left-hemisphere (IU 131)

four-legged

Extrapyramidal

Extra pyramidal

neurobiologists

neurolinguists

2-year-old

baby sitter

#40

#22

#39

w/i

Count as SEPARATE Words:

left hemisphere

right hemisphere

cerebral artery

hemisphere

higher order

New Guinea

'buh' 'cuh'


/b/ /c/

inferior frontal gyrus

Brodmann's 22 (etc.)

sign language

motor system

uh huh

2 years

Syntax/order (or anything like this = 3 words)

Coding Rubric

Spacing:

One unit per line.

If multiple units embedded in one sentence, “return” so that each one is on its own line

and indent.

Example:

Our ancestors had skeletons that allowed for speech at the dawn of our evolution

1. We would first separate this into the two idea units present here

Our ancestors had skeletons that allowed for speech

at the dawn of our evolution

2. Then, we would assign the idea units associated in the master list (idea units 17 and

18) + whether it's an MI, ID, LID, or NA

Our ancestors had skeletons that allowed for speech (17, ID)

at the dawn of our evolution (18, NA)

3. Then, we would award a credit for how complete that unit is. How well does it

semantically match up to the master list?

If it's got the major points and is pretty intact, we would award it with a 1.


If it’s less than 75% there (generally), you would award it a .5, and if very little of the

important content is there, a 0.

Our ancestors had skeletons that allowed for speech (17, 1, ID)

at the dawn of our evolution (18, 1, NA)

4. Then, you would insert a comment and justify why it was given less than 1 credit.

In this example, everything is pretty perfect, so we wouldn’t need to write in anything

else.

1 - Close enough to portraying the main point, with most of the important information there (about 75% or more). We are looking for the semantic/conceptual basis of what they are saying rather than a verbatim match.

0.5- Missing a substantial amount of the material, missing a critical part of that

unit (noted in Master Code), but most important info is there & we can tell which IU they

are referencing.

As we age, we aren’t able to hear high frequency sounds = 36, 1 pt, NA

As we age, we can't hear as well = 36, .5, NA (missing semantic

component about high frequency sounds specifically)

0 - One or two words (e.g., “left hemisphere” with no connection to anything else, or not in a way that can be matched up to an idea unit)
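
The credit ladder above rests on a semantic judgment made by human coders. Purely as an illustration of the thresholds, the sketch below approximates "how much of the master idea unit is there" as content-word overlap; the stop-word list and the mapping of any nonzero partial overlap to 0.5 are assumptions for the example, not part of the rubric.

```python
# Hypothetical stop-word list; the real rubric is a semantic, not lexical, judgment
STOP_WORDS = {"the", "a", "an", "of", "to", "is", "are", "we", "as"}

def content_words(clause):
    return {w.lower().strip(".,'\"") for w in clause.split()} - STOP_WORDS

def completion_credit(recalled, master):
    """Approximate 1 / 0.5 / 0 completion credit via content-word overlap."""
    master_words = content_words(master)
    overlap = len(master_words & content_words(recalled)) / len(master_words)
    if overlap >= 0.75:
        return 1.0   # "about 75% or more" of the important info present
    if overlap > 0:
        return 0.5   # identifiable match, but missing substantial material
    return 0.0       # too little content to match an idea unit

master = "As we age, we aren't able to hear high frequency sounds"
completion_credit("As we age, we aren't able to hear high frequency sounds", master)  # 1.0
completion_credit("As we age, we can't hear as well", master)                         # 0.5
```

The second call mirrors the rubric's own example: the recall is identifiable as IU 36 but is missing the "high frequency sounds" component, so it earns .5.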

5. Sometimes participants recall statements that don’t match up with idea units.

In this case, highlight the text that is troublesome and insert a comment like this stating

which of these it best represents:

Summary - idea units are too closely entangled to separate into anything

independently meaningful. Insert a comment highlighting the associated text with

a brief explanation as to why it’s a summary and if possible, your best guess as to

which Idea Units are mashed into it.

Inference - Content that doesn’t semantically match up to any idea unit. Usually

outside information that may or may not be correct (relating to their own personal


experience, incorporating other examples, info that may mean something to them

but not to us)

Repeat - Idea unit that was already included at some point earlier in that same

recall. If one of the repeats is more complete than the other, keep the more

complete one in your completion tally.

Wrong - Idea unit that matches up except that the semantic content is incorrect.

You would not count this in the idea unit count. An example would be “the right

hemisphere is dominant for language.” All of those words are correct but it is

conveying a serious conceptual misunderstanding. So you would highlight the

text, comment it as “wrong” with a brief explanation why and which idea unit it

matched up to. These are VERY rare.

6. Assign the segment to which the idea unit belongs

Seg 1 (S-1) Seg 2 (S-2) Seg 3 (S-3) Seg 4 (S-4) Seg 5 (S-5) Seg 6 (S-6)

IUs 1-39 40-86 87-121 122-163 164-211 212-261

Our ancestors had skeletons that allowed for speech (17, 1, ID, S-1)

at the dawn of our evolution (18, 1, NA, S-1)
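
The boundary table above is a straightforward lookup; a minimal sketch of the mapping, with boundaries taken directly from the table:

```python
# Segment boundaries from the table above (IUs 1-39 = S-1, 40-86 = S-2, ...)
SEGMENT_RANGES = [
    (1, 39, "S-1"), (40, 86, "S-2"), (87, 121, "S-3"),
    (122, 163, "S-4"), (164, 211, "S-5"), (212, 261, "S-6"),
]

def segment_for(iu):
    """Return the segment label for a master-list idea unit number."""
    for low, high, label in SEGMENT_RANGES:
        if low <= iu <= high:
            return label
    raise ValueError(f"idea unit {iu} is outside the master list (1-261)")

segment_for(17)  # "S-1", as in the skeleton example above
segment_for(18)  # "S-1"
```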

7. Example of recall and what you should write into the document (already separated out

into units/lines here)

The speaker talked about how language is connected to the brain. Summary

She spoke about how we as babies inherently have the ability to understand language (even if we

cannot write, read, or speak it yet we understand that it is language). Summary

She mostly talked about things that can happen to our ability to understand and produce language

when damage is done to our head/ brain. Summary

In one case, you can lose the ability to understand language being spoken to you. 93, .5, ID, S-5

In another, you can lose the ability to produce language. 79, .5, NA, S-2

The speaker later went on to talk about how language is developed in babies, how we map words

in our brains to certain things. 182, .5, NA, S-5

In one of her examples, most all furry four legged animals to a baby is a cat or a dog 181, 1, NA,

S-5


there is not distinguishing which is which. Inference

At a certain age in development our brain maps out what a cat is and what a dog is Repeat

And the child can distinguish between the two. 185, 1, NA, S-5

Note: I excluded number of words, inferences, summaries, repeats, wrong, and main

ideas/important details/less important details/N/A from the analysis due to low scores

across each of the variables separately.


APPENDIX H

CUED RECALL AND INTEGRATION CODING SCHEMA

Coding cued recall scores: Independent clauses (sentences)

Participants receive two instructions: 1. Elaborate upon the topic presented, and 2. Elaborate upon how it relates to other topics and ideas in the lecture.

EXAMPLE 1: topic is "Phonemes."

Sample recall:

“Phonemes are the sounds that the letters make and older people have a hard time

hearing these. For example we can understand the difference between "cat" and "bat"

because of phonemes.”

1. After counting number of words (unseparated version), separate the sentences

into independent clauses.

“Phonemes are the sounds that the letters make and older people have a hard time

hearing these. For example we can understand the difference between "cat" and "bat"

because of phonemes.”

Becomes:

Phonemes are the sounds that the letters make

Older people have a hard time hearing these.

For example we can understand the difference between "cat" and "bat" because

of phonemes.

2. Assign the segment to which each clause mostly belongs or is referring (in a

comment)

Phonemes are the sounds that the letters make S-1

Older people have a hard time hearing these. S-1

For example we can understand the difference between "cat" and "bat" because

of phonemes. S-1


3. Identify whether the target response is present (see the "Target vs Related"

Sheet)

Phonemes are the sounds that the letters make S-1

Older people have a hard time hearing these. S-1

For example we can understand the difference between "cat" and "bat" because

of phonemes. S-1

(The target response is present. Highlight it in RED. If a prompt is supposed to

have more than one target, highlight all present targets. If only one out of several

is present, highlight the one, and adjust the percent in your final count. The rest of

the clauses recalled in reference to that segment count as "Related." They are

relevant to the prompt but not required. If more than one optional target is

recalled (i.e., 1 out of 2 are required and the person has both in their recall), count

the first one as 100%, and allocate the second one to their "Related" score.)

4. Assign 0 or 1 to that clause, next to the segment.

Phonemes are the sounds that the letters make S-1, 1

Older people have a hard time hearing these. S-1, 1

For example we can understand the difference between "cat" and "bat" because

of phonemes. S-1, 1

5. Add up the number of clauses recalled from the designated segment. (3)

6. Add up the number of clauses recalled from outside of the segment (0).

7. Finally, at the top of each recall, organize the results for that recall by listing

them in red bold font like this:

#Words = 31, #Clauses = 3 (Target = 100%, Related = 2), Integration = 0

Target is 100% here because there was only 1 clause we really focused on to

answer the prompt. If there were 3 required clauses, and the person only listed 1

of them, Target would be 33.3%.
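For a coder who wants to check the arithmetic, the tallying in steps 4–7 can be sketched in a few lines of Python. This is a minimal sketch only: the function name, the tuple layout, and the word-counting rule (whitespace splitting) are my own assumptions, not part of the rubric, so word counts may differ slightly from hand counts.

```python
# Hypothetical sketch of the tallying in steps 4-7 (not part of the rubric).
# Each coded clause is (text, segment, score, is_target), where is_target
# marks a highlighted target clause for the prompt's segment.

def summarize_recall(clauses, prompt_segment, n_required_targets):
    """Build the summary line written at the top of each recall."""
    n_words = sum(len(text.split()) for text, _, _, _ in clauses)
    n_clauses = sum(score for _, _, score, _ in clauses)
    n_targets = sum(1 for _, seg, _, is_target in clauses
                    if seg == prompt_segment and is_target)
    target_pct = 100.0 * min(n_targets, n_required_targets) / n_required_targets
    # Non-target clauses scored 1 from the prompt's segment count as Related;
    # extra targets beyond the required number are also allocated to Related.
    related = sum(1 for _, seg, score, is_target in clauses
                  if seg == prompt_segment and score == 1 and not is_target)
    related += max(0, n_targets - n_required_targets)
    # Integration = scored clauses recalled from outside the prompt's segment.
    integration = sum(score for _, seg, score, _ in clauses
                      if seg != prompt_segment)
    return (f"#Words = {n_words}, #Clauses = {n_clauses} "
            f"(Target = {target_pct:.0f}%, Related = {related}), "
            f"Integration = {integration}")

# The Example 1 recall about phonemes, coded as in steps 1-4:
clauses = [
    ("Phonemes are the sounds that the letters make", "S-1", 1, True),
    ("Older people have a hard time hearing these.", "S-1", 1, False),
    ('For example we can understand the difference between '
     '"cat" and "bat" because of phonemes.', "S-1", 1, False),
]
print(summarize_recall(clauses, "S-1", 1))
```

Run on the Example 1 recall, this reproduces #Clauses = 3, Target = 100%, Related = 2, and Integration = 0 from the summary line above.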

EXAMPLE 2: Topic is "Language is an instinct."

“After listening to the lecture I learned that language is an instinct because we are the

only species to be symbolic in our language. Language is vital to us because that is the

basis of everyday communication. Our brains are automatically programmed to learn a

language as we get older and abstract those ideas from certain languages. The opposite

side of that spectrum is learning to read and write. Notice how not every human knows

how to read and write, but every human can speak their respective languages. This is

because reading and writing is an invention that we have to research and practice at.”

This example has multiple embedded clauses, so we need to separate them out into

independent (free-standing) ones.

After listening to the lecture I learned that language is an instinct S-1, 0

we are the only species to be symbolic in our language. S-1, 1

Language is vital to us S-1, 1

Language is the basis of everyday communication. S-3, 1

Our brains are automatically programmed to learn a language S-6, 1 (217)

as we get older (WE) abstract those ideas from certain languages. S-4, 1 (159)

The opposite side of that spectrum is learning to read and write S-4, 1

Notice how not every human knows how to read and write S-4, 1

every human can speak their respective languages. S-4, 1

This is because reading is an invention that we have to research and practice at. S-4, 1

#Words = 105, #Clauses = 9 (Target = 0%, Related = 2) Integration = 7

Notice that this participant addressed the prompt in a somewhat abstract way. He/she answered the question but had to recruit information from another segment. In the recall, we would highlight in MAGENTA the content that (fairly enough) answers the prompt but isn't the target clause we were looking for, with a note to the side saying "TARGET FROM OTHER SEGMENT." We would still count this in the overall score, but Target will be 0%.

Additional Rubric Specifications:

- Connectors (and, but, then, so, because) can be removed when separating clauses out.

- Implied predicate (inserted for clarity) identified in parentheses

Example: as we get older (WE) abstract those ideas from certain languages.

- Reading and writing is spread across segments 4 and 5, so for segment 4, focus more on instinct vs. learned (except idea unit 164, "taught"), and for segment 5, focus more on the neural and abstraction aspects.

The opposite side of that spectrum is learning to read and write.

Notice how not every human knows how to read and write, but every human can

speak their respective languages.

This is because reading and writing is an invention that we have to research and

practice at.

None of these really reference being taught to read & write, although

they're close.

- If there is a close tie between two different segments and you are able to decide which segment applies, write the idea unit number from the Master Code that informed your decision.

Our brains are automatically programmed to learn a language S-6, 1 (217)

- If t-unit states incorrect information, highlight & comment WRONG.

- If person essentially repeats a t-unit within a single recall, highlight and comment

REPEAT.

- If the unit doesn't tie in to answering the prompt, identify the segment but count as "0"

with the comment "Irrelevant"

- If a sentence seems to extend beyond the lecture info (such as a novel example outside

of the lecture), include in total count but also insert a comment with "Outside

Info"

- Clauses that essentially serve as introductions, such as "today I watched a video lecture," don't need to be accounted for; if one is embedded within a scored clause, that's fine. We simply ignore it.

Yesterday I watched a video lecture over the topic of language development S-1, 1

In the lecture she explained that language is an instinct S-1, 1

- Rewriting the prompt does not count toward overall score

Prompt: Language is an instinct

Answer: Language is an instinct to our species S-1, 0

i.e.:

Language is an instinct because we are born with skeletal specializations

--> Language is an instinct S-1, 0

--> we are born with skeletal specializations S-1, 1

- If a clause OBVIOUSLY contains content from MULTIPLE segments, separate it out and count it as two separate clauses.

Language consists of phonemes and morphemes.

Language consists of phonemes S-1, 1

Language consists of morphemes S-2, 1

The goal is to avoid over-complicating the coding, but this instance is an easy fix.

Cued Recall Targets

Note that exact wording doesn't matter as long as semantically, they are answering the

prompt based on these target units.

Segment 1

Prompt: Phonemes

Target 1

-Phonemes are individual sounds

Prompt: Language is an instinct

Target 1 of 4

-The left hemisphere shows specialization before birth

-We recognize phonemes at birth

-Our ancestors had skeletons that allowed for speech.

-All people will learn how to speak language

Segment 2

Prompt: The left hemisphere

Target 1 of 2

-The left hemisphere is dominant for language.

-Language centers are in the left hemisphere

Prompt: Broca's Aphasia

Target 1

-Broca’s aphasia is the loss of language production (motor/expressive

aspect)

Segment 3

Prompt: Wernicke's Aphasia

Target 1

-Wernicke’s aphasia is the loss of understanding language

(sensory/receptive aphasia)

Prompt: The Right Hemisphere

Target 1

-The right hemisphere is responsible for the emotional aspects of language

(AKA intonation, prosody, rhythm)

Segment 4

Prompt: Sign Language

Target 1 of 2

-The left hemisphere is also dominant for sign language.

-Sign language uses the same areas (Broca/Wernicke) as spoken language.

Prompt: Written Language

Target 1 of 2

-Written language is an invention/not an instinct

-You have to be taught to read/write

Segment 5

Prompt: The brain starts to map meaning to words

Target 1 of 2

-The baby’s brain is designed to assign meaning to words

-The baby’s brain is designed to categorize objects with words

Prompt: Habitual language patterns

Target 1

-We learn to automatically respond

Segment 6

Prompt: Bilinguals

Target 2 of 3

-Bilingual people speak multiple languages.

-It is harder to learn multiple languages with age

-Children’s brains automatically abstract/map to meaning other languages

Prompt: Evolution of a new language

Target 1

-Children developed a new sign language

Note: I excluded the number of words from the analysis due to high variance. Number of words was also highly and significantly correlated with number of clauses.