
Running head: CRITICAL REVIEW OF DIGITAL GAME-BASED LEARNING

Critical Review: Digital game-based learning: Impact of instructions and feedback on

motivation and learning effectiveness

Jennifer Chang Wathall Student ID 663027

Central Michigan University

EDU 800

Instructor: Dr. Michael Dennis Deschryver


Problem (about 1 page)

1. Identify the clarity with which this article states a specific problem to be explored.

The problem stated appears to lack some clarity and consistency. Erhel and Jamet

(2013) described the problem to be explored as investigating under what conditions

digital game-based learning (DGBL) is most effective, by looking at the type of instructions

(learning or entertainment) given to learners and at introducing feedback in a DGBL

environment. The authors assumed that DGBL is more effective than conventional

approaches or media for learning and motivation. In any learning environment, the type of

instruction and the quality of feedback can enhance learning and promote deep

learning. In other words, the positive impact of these conditions can be attributed to these

instructional strategies, rather than the DGBL tool itself. The authors concluded that DGBL

can promote motivation and learning in all situations if learners are given opportunities to

actively process the content. This would be true of any environment not just DGBL. What

are the authors trying to prove? Is it whether DGBL is effective for learning and motivation

as compared with conventional teaching approaches or whether using different types of

instruction and feedback promotes learning and motivation in DGBL? If it is the latter, then

many studies have already proven that type of instruction and quality feedback promotes

learning. It seems there are two separate problems. The goal of the researchers should have

been about using different types of instruction and providing quality feedback in DGBL

compared with conventional environments and whether the DGBL enhances learning or

results in deeper learning.

The article identified the issues with researching the benefits of DGBL compared

with conventional approaches in that there are too many uncontrollable variables, but I

cannot agree that a value-added approach can prove the benefits of DGBL when

comparing with conventional media or teaching approaches. This will be addressed later in

this paper.

2. Comment on the need for this study and its educational significance as it relates

to this problem.

Digital game-based learning (DGBL) has grown in popularity in recent years due to

the proliferation of children and adolescents spending more time playing video/online

games. With this gain in popularity, many researchers have investigated whether DGBL

truly impacts learning in a deep and meaningful fashion and, in turn, improves learning and

motivation. Most past research on DGBL was highly conflicting in terms of effectiveness on

learning, and no clear conclusions could be made about whether DGBL has a positive effect on

learning and motivation. Most studies have concentrated on the effects of DGBL in learning

and motivation compared with conventional environments or conventional media; however,

this study takes a value-added approach by looking at the type of instructions and feedback

given in a DGBL environment and the impact of these on learning and motivation. In other

words, outcomes of learners will be compared when using different versions of the same

DGBL tool. This article briefly discussed the benefits of using DGBL and then went on to

describe a study investigating whether the type of instructions given to learners and

the provision of regular feedback have an impact on learning and motivation. There is an

educational significance in terms of researching whether DGBL environments are beneficial

to learning, but I am not convinced that this study achieved this.

3. Comment on whether the problem is “researchable”? That is, can it be investigated

through the collection and analysis of data?

If we are purely looking at collecting data on two different types of

instructions (learning and entertainment) and providing regular feedback in DGBL, then the

problem is researchable. To begin the experiments on a level playing field, a pretest was

given to eliminate any participants who had prior medical knowledge. The next stage of

the data collection involved a survey after the DGBL to assess participants’ motivation. The

final stage was a questionnaire that included paraphrase-type and inference-type

questions.

Theoretical Perspective and Literature Review (about 3 pages)

4. Critique the author’s conceptual framework.

The authors’ conceptual framework is well-researched and supports the benefits of

DGBL from an entertainment perspective in terms of motivation and engagement. Several

studies have also supported the positive relationship of intrinsic motivation and learning

scores in DGBL; however, studies to prove the benefits of DGBL compared with

conventional approaches and other media have been inconclusive and even highly

contradictory.

I completely agree with the authors’ explanations of surface learning and deep

learning by Kester, Kirschner, and Corbalan (2007) and confirmed by Sweller (1999). Deep

learning involves:

“the critical analysis of new ideas, linking them to already known concepts and

principles, and leads to understanding and long-term retention of concepts so that

they can be used for problem solving in unfamiliar contexts” (p. 158).

One of the goals for DGBL should be to encourage deep learning, but I think the question

should be “Does DGBL enhance deep learning compared with conventional teaching

environments?” and not about whether type of instructions and feedback result in deeper

learning. The authors claimed that deep learning does occur in DGBL when learning

instructions are given and also when entertainment instructions are given alongside

feedback, but has learning been enhanced by the use of DGBL? The authors also discussed

the difference between an incidental and intentional learning environment and the benefits

of an intentional learning environment for eliciting deep learning. I do think the inherent

nature of DGBL provides an intentional learning environment for learners which may elicit

deeper cognitive processing; however, this is not what the authors focus on. Instead, the

focus for this study is about the type of instructions given by the teacher, specifically in

DGBL to elicit deep learning. If the DGBL environment is well-designed and provides

intentional learning opportunities with clear learning instructions, then surely additional

instructions are redundant. Many examples exist in mathematics learning which utilize

DGBL platforms. Mathletics and Manga High are both DGBL environments which provide

learners with an intentional learning environment that is well designed and presents

purposeful instructions. The specific DGBL used in this study provided another limitation

which will be discussed later in this paper.

5. How effectively does the author tie the study to relevant theory and prior research?

Are all cited references relevant to the problem under investigation?

The authors began by citing references and research about:

• the popularity of DGBL and the need to conduct more research about the effects of

DGBL on learning and motivation (Graesser, Chipman, Leeming & Biedenbach,

2009). This research is relevant to the problem under investigation as video and

online gaming gains popularity.

• the features of a DGBL (Mayer & Johnson, 2010; Prensky, 2001). This research

about the main features of DGBL is relevant, so there is a common understanding.

• the benefits of DGBL in terms of motivation from an entertainment perspective,

including goal orientation (mastery versus performance), intrinsic/extrinsic

motivation, interest and self-efficacy. This research is relevant to the problem

presented as the impact on DGBL and motivation is one of the aims.

• benefits of DGBL compared with conventional teaching and using conventional

media. Generally, the studies cited are inconclusive and highly contradictory in

regards to whether DGBL has a positive effect on learning and motivation. This is

where the mismatch lies in this study. The study is trying to prove the effectiveness

of DGBL by using a value-added approach that never establishes a baseline for learning

and motivation without DGBL.

• the importance of different types of instructions and feedback to allow for cognitive

processing in an educational setting in general. The research here is relevant to the

problem under investigation and provides a justification to why the researchers

chose to look at types of instruction and feedback. The authors have applied this

research, previously conducted in conventional environments, to the DGBL environment.

6. Does the literature review conclude with a brief summary of the literature and its

implications for the problem investigated?

There is a brief summary of the literature, and a justification is provided by the

authors in terms of why they chose the value-added approach. The authors explain that many

studies have compared DGBL with conventional media and are inconclusive when it

comes to motivation and learning, so they suggest looking at one variable initially, the type of

instruction, to investigate its effectiveness on motivation and learning in DGBL. The study

conducted a second experiment to look at the effects of each type of instruction with

feedback.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The purpose of the first experiment was to investigate the hypothesis that learning

instruction would impact learners by encouraging them to pursue mastery goals while

entertainment instruction would encourage performance goals. Mastery goals describe a

learner motivated to develop or master new skills or knowledge, while performance goals

describe learners motivated to demonstrate their ability to succeed. Since the first experiment failed

to reveal any relationship between the type of instruction and either mastery or performance

goals, the authors embarked on a second experiment. The second experiment looked at the

effects of adding feedback in the form of a knowledge of correct response (KCR) and

hypothesized that KCR feedback would reduce redundant, low-level thinking, resulting in

more relevant learning under both learning and entertainment instructions. The aim

stated is as follows: “to demonstrate that the presence of KCR feedback in the quizzes of a

digital learning game can modify learning strategies induced by the instructions” (p. 163).

The two hypotheses based on the two experiments were rather unclearly stated in the

body of the paper, making them difficult to identify. The appropriateness of these hypotheses

is also to be questioned. How can looking at the types of instruction and giving feedback in

a DGBL prove that DGBL is more effective for learning and motivation compared with

other environments?

Research Design and Analysis (about 3 pages)

8. Critique the appropriateness and adequacy of the study’s design in relation to the

research questions or hypotheses.

For the two experiments, the authors chose to use five experimental phases. Phase

one consisted of a pretest on prior knowledge of the DGBL topic: age-associated diseases.

Participants who scored above three out of six were removed from the next stages of the

experiment. Here I question the small number of questions given in the pretest and whether

there was an opportunity for participants to guess questions. In addition, since the questions

were not related to the DGBL topic, how did the authors know that participants did not have

any prior or personal experience with aging associated diseases. To overcome this limitation

a questionnaire about the participant’s medical history and experience could have been

administered.

Phase two consisted of participants following a DGBL simulation on four

age-associated diseases: Alzheimer’s, Parkinson’s, myocardial infarction, and stroke. Again, I

reiterate: how did the authors know that participants did not have any personal experience or

prior knowledge of any of these four diseases?

Phase three consisted of participants completing a quiz about the four diseases with

phase four asking participants to fill in questionnaires about motivation in terms of mastery

and performance goals and intrinsic motivation items. The final phase consisted of a

questionnaire solely based on their knowledge and included two different types of questions:

paraphrase-type questions measuring memorization and inference-type questions, which

involve more intellectual engagement and comprehension.

I am surprised that there was no control group to compare the results of groups that

received no instruction or feedback. Including a control group would have allowed the

authors to rule out other variables that may have influenced the experiment.

At the end of this article, the authors state that one of the objectives for this study was to

answer the question “Is deep learning compatible with serious games?” (p. 165). In my

opinion, this objective/research question was not clearly stated from the outset. One of the

hypotheses presented is “the learning instruction would result in significantly higher scores

on the different learning assessments, especially on inference-type questions assessing the

quality of deep learning,” and the design of the experiment was set up to prove or disprove

this.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants)

and their implications for generalizability.

For experiment one, the authors chose 46 participants (24 women, 22 men) aged 18-

26 years who were all university undergraduates. Choosing an age group from 18-26 years

presents limitations with the use of DGBL. This sample represents an older generation that

may not have had a lot of exposure to DGBL so experience and comfort levels with using a

DGBL may vary and bias the results. This group was then split into the learning (9 men, 15

women) and entertainment groups (9 men, 15 women). In terms of statistical analysis, each

group is a very small sample. A common recommendation for data collection is to

have at least 30 data points, and I would have recommended at least 30 participants for each group.

From the pretest results, more participants were excluded, further reducing the sample size.
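As a rough illustration of why these group sizes are problematic, a standard normal-approximation power calculation (my own sketch, not part of the study; the effect sizes used are hypothetical) gives the per-group sample size needed to detect a standardized mean difference:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means, where effect_size is Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) at 80% power needs about 63 participants
# per group, well above the 24 per group used in experiment one.
print(n_per_group(0.5))
```

Even a large hypothetical effect (d = 0.8) would call for roughly 25 participants per group, so groups of around 24 leave the experiments underpowered for anything but very large effects.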

In the second experiment, a new group of participants was recruited,

consisting of 16 men and 28 women (44 in total). This represents a gender imbalance

which could further skew results. Men may have a tendency to be more competitive

compared with women, and this may be highlighted more in DGBL. Perhaps gender

difference may also reveal a difference in preference for either mastery or performance

goals in terms of motivation. From the pretest, four participants were excluded, and if all of

these were men, then the imbalance between genders would have been even more

pronounced, skewing results further. In terms of generalizability of this study I would

question the external validity of this study’s sampling methods. Due to the gender

imbalances, the small sample sizes and the restricted age group the experiment would not be

generalizable across the general population.

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions,

interview protocols, data collection procedures).

The authors collected data based on a pretest and on a quiz, taken after the DGBL,

about its topic: the four age-associated diseases. From the pretest, participants who

scored higher than 50% were eliminated from the experiment. The pretest did not involve

any questions about the topic of the DGBL, so it is difficult to ascertain whether the

participants who were not eliminated had any prior knowledge or personal experience of the

DGBL topic, namely the four age-associated diseases.

Data was also collected through questionnaires about motivation using performance

and mastery questions and three intrinsic motivation items. The last stage of the data

collection process was to ask participants to answer two types of questions based on the DGBL

topic. The two types of questions were paraphrase-type questions and inference-type questions

that required more comprehension. It is important to distinguish between the two types of

questions given to participants: one attempts to collect data about memory recall, while the

other assesses whether conditions such as the type of instruction and feedback in a DGBL

environment led to deep learning.

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures

used.

Reliability of the measures used refers to the consistency of the measurements:

whether the experiment, if repeated, would yield the same results.

There are some reliability issues in both experiments in terms of the small number of

participants in each type of instruction group and the gender imbalance. This sampling

problem can introduce random error into the measurements and data collected, and if this

experiment were repeated it could yield completely different results. The most common

measure of reliability is using the reliability coefficient and this study did not include this

calculation to check for reliability. Typical methods in social research include: test-retest,

alternative forms, split halves, inter-rater reliability and internal consistency. The only

internal consistency that could be observed was that the procedures were the same for both

experiment one and two. There is no mention in the study of measuring the internal

consistency using Cronbach’s alpha coefficient. One of the most important methods to increase an

experiment’s reliability is to increase the length of the measures, in this case the number of

survey questions. In the pretest, only six questions were given, and the questionnaire asking

paraphrase-type and inference-type questions contained only eight questions in total.
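For reference, the internal-consistency coefficient the study omits, Cronbach’s alpha, can be computed directly from item-level scores. The sketch below is my own illustration using made-up scores, not data from the study:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list per questionnaire item, each holding the scores
    of every respondent on that item."""
    k = len(items)  # number of items
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Made-up scores: three items answered by four respondents
items = [[3, 4, 5, 6], [2, 4, 5, 5], [3, 5, 4, 6]]
print(round(cronbach_alpha(items), 2))
```

With only six pretest questions and eight knowledge questions, a low alpha would be hard to remedy, which is why lengthening the measures matters.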

Validity is concerned with how well a test measures what the researchers have

set out to measure, and it relates to the credibility of qualitative research. There are four

types of validity and they are: statistical conclusion validity, internal validity, construct

validity, and external validity. This study is lacking construct validity, which refers to how

well an idea has been transformed or translated into a “functioning and operating reality”

(Drost, 2011). The authors attempt to prove the benefits of DGBL in learning and

motivation based on looking at types of instruction and giving feedback in DGBL. Proving

deep learning is present by using different instructions or feedback in a DGBL does not

prove that the DGBL itself was the cause of the deep learning. Any learning environment

will promote deep learning with different types of instructions and providing quality

feedback. This study is also lacking external validity in that the conclusions cannot be

generalized to the wider population, to other settings, or to a different time. As mentioned before,

the sampling methods used in this study have to be called into question in terms of the

external validity.

Interpretation and Implications of Results (about 3 pages)

13. Critique the author’s discussion of the methodological and/or conceptual

limitations of the results.

There are three main limitations discussed in this study: issues arising from the

choice of the specific DGBL, data collected from the quizzes on the DGBL and the

methodology that was employed. The choice of the DGBL (ASTRA) posed limitations

as it was not a well-designed, interactive DGBL example. This is a valid limitation. Many

mathematical DGBL examples are highly interactive, provide quality feedback during

DGBL and have specific learning intentions. ASTRA on the other hand encouraged very

little interaction and appeared to be more of a video presentation rather than providing

authentic DGBL. ASTRA could be seen as a flipped classroom tool providing direct

instruction rather than a competitive, interactive DGBL where learners respond

dynamically by being given challenges at different levels.

The second limitation was that generally participants scored highly on the quizzes so

there were very few opportunities to receive feedback. This means that deep learning

cannot be attributed to feedback alone, as participants received very little feedback overall.

The third limitation outlined the issue with the methodology in terms of using offline

data to measure the effects of learning or entertainment instruction. This limitation is less

clearly stated and perhaps implies that real-time collection of data may be more useful in

order to investigate whether the time taken for deep processing when feedback is provided is

relevant.

14. How consistent and comprehensive are the author’s conclusions with the reported

results?

The authors’ conclusions are inconsistent with the reported results. The results

from experiment one showed that learning instruction as opposed to entertainment

instruction in a DGBL environment resulted in higher comprehension scores but had no

effect on motivation. The second experiment concluded from the results that the

entertainment instruction group performed better on the comprehension questions once

feedback was provided. The final conclusion was that entertainment instruction should not

be used in DGBL and is not effective for learning. However, in the second experiment

entertainment instruction with feedback was found to have an effect on inference type

questions and comprehension. These are two conflicting conclusions; however, to give credit

where it is due, the authors do recognize that they cannot explain why the

entertainment instruction was better than the learning instruction in experiment two.

15. How well did the author relate the results to the study’s theoretical base?

The first experiment failed to find any effect of DGBL on motivation; however,

results were significantly higher on the inference-type questions compared with the

paraphrase-type questions for the learning instruction group, suggesting that learning

instruction may impact comprehension and deep learning in a DGBL. I pose the question:

“How can learners be unable to recall memorized facts but still comprehend and answer

higher-order questions?” Without some factual base, learners would surely struggle to

comprehend and answer higher level questions, in my opinion.

16. In your view, what is the significance of the study, and what are its primary

implications for theory, future research, and practice?

The conclusions drawn are inconsistent with the reported results. The main

conclusions drawn are that learning instruction in DGBL elicited deeper learning compared

with entertainment instruction; however, when feedback was introduced, entertainment

instruction resulted in deeper learning. Based on this, no conclusion about the type of

instruction and its effect on motivation and learning can be made for DGBL. The two

variables, type of instruction and feedback, could be used in a conventional

teaching environment and be just as effective or better at promoting deep learning. How do we

attribute deeper learning to the DGBL environment compared with conventional

environments or other media? The SAMR (Substitution, Augmentation, Modification,

Redefinition; Puentedura, 2013) model for technology integration suggests that digital

tools should be used to encourage higher-order cognitive processing. The goal of this

model is to use technology to promote higher levels of achievement. In this particular

study, the DGBL tool ASTRA appeared to be using technology as a substitution/

augmentation level and as a result, the deep learning that occurred in terms of the inference

type of questions cannot be attributed to DGBL.

Future research should look at the design of DGBL and how this fits in with the

SAMR model as illustrated in Figure 1. A well-designed DGBL tool should be aiming

towards the redefinition level by creating an opportunity for learners to experience deeper

learning that was previously inconceivable with conventional approaches. The goal of

modification and redefinition is to enhance learning and promote deeper cognitive

processing and any digital tool needs to address this in the design stage. So far, the research

is inconclusive and fails to find a positive link between DGBL and learning and motivation

compared with conventional environments or media.

Figure 1 SAMR Model

Puentedura, R. (2013) SAMR Model. Retrieved from http://www.hippasus.com/rrpweblog/


References

Drost, E. A. (2011). Validity and reliability in social science research. Education Research and

Perspectives, 38(1), 105-124.

Erhel, S., & Jamet, É. (2013). Digital game-based learning: Impact of instructions and feedback on

motivation and learning effectiveness. Computers & Education, 67, 156-167.

https://doi.org/10.1016/j.compedu.2013.02.019

Puentedura, R. (2013). SAMR: A Contextualised Introduction. Retrieved from

http://www.hippasus.com/rrpweblog/archives/2013/10/25/SAMRAContextualizedIntroduction.pdf