LEVERAGING EDUCATIONAL SOFTWARE WITH EXPLORATION GUIDES IN HIGHER ARTS EDUCATION: THE VIDEOLAB SIMULATION CASE STUDY

Eduardo Morais¹, Carla Morais², & João C. Paiva²

¹Faculty of Engineering, University of Porto (Portugal)
²Faculty of Sciences, University of Porto (Portugal)

Abstract

The availability of 'creative computing' toolkits has lowered the barriers educators face in developing complex applications, enabling them to create educational software according to their own assessments of needs. VideoLab is one such case: educational software designed to assist the teaching of video technology concepts in accordance with an assessment of needs at a Film Studies programme in a Portuguese higher education institution. The software combines simulations and tutorials and was designed according to principles laid out by the Multimedia Learning Theory (Mayer, 2003) and the Cognitive Load Theory (van Merriënboer & Ayres, 2005), and its use in the classroom is complemented by an exploration guide for students (cf. Paiva & Costa, 2010). Empirical assessment of VideoLab's instructional efficacy was carried out with a total of 40 students over two school years. During specific lessons of the Editing unit, students were given the software and asked to follow the exploration guide while completing the included questions, so as to measure learning. Other methods were employed simultaneously, including observation by the researcher, screen recording, and a questionnaire based on the Learning Object Evaluation Scale for Students (Kay & Knaack, 2008), so as to determine students' perceptions of learning value, design quality, and engagement. VideoLab was found to be an effective learning object, and its design quality was found to be very acceptable. Findings from previous research on exploration guides were also replicated, in that students' close adherence to the instructions was key to learning effectiveness. Educational software development thus should not stop at the software itself, even in the face of a 'self-sufficient' design. The consequent implications for the reusability of educational software and simulations should be considered and can be a focus of future research.

Keywords: Higher education, arts education, educational software, exploration guide, case study.

1. Introduction

Despite the proven efficacy of educational software and computer-assisted instruction in general (cf. Tamim et al., 2011), little specific research and development can be found within the domain of Higher Arts Education. While there is some evidence of instructional gains from the use of digital technologies in arts education, at least at K-12 levels (Munteanu, Gorghiu, & Gorghiu, 2014), their classroom potential in higher education remains mostly unexplored (Wilks, Cutcher, & Wilks, 2012). This paper presents a case study in the local development and use of educational software which, the authors feel, meets the call for a critical embrace of digital media's possibilities while resisting technological determinism (Selwyn, 2010; Snyder & Bulfin, 2007).

VideoLab met a specific need found in teaching the Editing unit of the Film Studies undergraduate programme at a Portuguese higher education institution. Being a compulsory unit, Editing is challenging for instructors because not all students are interested in the technical side of filmmaking. Although video editing software has made large strides towards user-friendliness, a proper understanding of digital video as its raw material still clarifies many tasks and enables students to solve common problems. To that end, concepts of digital video such as aspect ratio, interlacing, or sampling would typically be explained with the aid of videos, images, and diagrams. Those lectures were often met with a lack of interest, and students would perform poorly at technical tasks in videography. The development and use of simulations allowing the manipulation of representations of video concepts was therefore considered adequate, conforming to the notion of material practice that is central to arts education (Hausman, 2000) while drawing on literature that corroborates the effectiveness of simulation and tutorial software (cf. Escueta, Quan, Nickow, & Oreopoulos, 2017).

DOI: 10.36315/2019v1end040


2. Design and development

VideoLab's design followed recommendations drawn from the Multimedia Learning Theory (Mayer, 2003) and the Cognitive Load Theory (van Merriënboer & Ayres, 2005). A key idea in VideoLab's design was to reduce the complexity of the information presented on screen at any one time, so as to reduce extraneous cognitive load. Realizing this 'modality effect' led to the implementation of a modal interface, in which each mode addresses a different set of technical concepts. VideoLab therefore includes four simulation modes: a) Aspect Ratio; b) Interlacing; c) Luminance, Chrominance and Sampling; d) Bit Depth. Each mode also contains a help function which presents multiple slides with a textual explanation of its topic, accompanied by diagrams and images; in articulation with the simulation that follows, this leverages a 'multimedia effect' that positively affects learning (Mayer, 2003). The help function is automatically activated the first time a user loads a specific mode and can be re-invoked at any time. Overall instructions for VideoLab are also presented automatically upon loading the software. In no situation is all the information presented at once, a progressive approach aligned with the implications of the Cognitive Load Theory.

As seen in Figure 1, the visual interface mimics the look of professional video hardware, under the presumption that the affordances thus provided are beneficial and improve the affective quality and perceived usability of the software (Hassenzahl & Tractinsky, 2006; Norman, 1988).

Figure 1. VideoLab’s user interface (Aspect Ratio mode).

Only the control buttons on the lower right area of the VideoLab application are changed according to the simulation mode, thus preserving an overall feeling of contiguity in the user interface (Figure 2).

Figure 2. VideoLab's four simulation modes – Aspect Ratio (top left), Interlacing (top right), Luminance, Chrominance and Sampling (bottom left), Bit Depth (bottom right).

2.1. Software development

VideoLab was built iteratively, starting from very simple routines implementing each simulation. Development leveraged Processing, a programming framework intended to ease the creation of complex software by artists and designers (Reas & Fry, 2006). With the added benefit of producing cross-platform executables, VideoLab was compiled for both Windows and Mac systems, allowing its installation on a larger number of workstations and a greater student reach. The executables and the source code are available in an open repository at https://github.com/edmorais/VideoLab.
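To illustrate the approach, the sketch below shows how a modal interface of the kind described in section 2 can be structured in Processing, with each mode's help shown automatically on first entry. This is a minimal illustrative sketch: apart from the mode names, its identifiers are invented and do not reflect the actual VideoLab source, which can be consulted at the repository above.

    // Minimal Processing sketch of a modal interface with per-mode help
    // shown automatically on first entry. Illustrative only: apart from
    // the mode names, identifiers do not come from the VideoLab source.
    String[] modes = {"Aspect Ratio", "Interlacing",
                      "Luminance, Chrominance and Sampling", "Bit Depth"};
    int currentMode = 0;
    boolean[] helpSeen = new boolean[4];
    boolean showingHelp = true;        // overall instructions on launch

    void setup() {
      size(800, 600);
      textSize(16);
      helpSeen[0] = true;              // first mode's help counts as seen
    }

    void draw() {
      background(30);
      fill(255);
      if (showingHelp) {
        text("Help: " + modes[currentMode], 40, 60);
        text("(click to dismiss and start the simulation)", 40, 90);
      } else {
        text("Simulating: " + modes[currentMode], 40, 60);
        // per-mode simulation drawing would go here
      }
    }

    void mousePressed() {
      showingHelp = false;             // dismiss help, reveal the simulation
    }

    void keyPressed() {
      if (key >= '1' && key <= '4') {  // switch mode with keys 1-4
        currentMode = key - '1';
        if (!helpSeen[currentMode]) {  // first visit: auto-show help
          showingHelp = true;
          helpSeen[currentMode] = true;
        }
      }
    }

Gating each mode behind a single switch of this kind is what keeps only the current mode's information on screen at any one time, in line with the cognitive load considerations discussed above.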



2.2. Exploration guide

Concerning how best to integrate the educational software with classroom practice, van Merriënboer and Ayres suggest "that a good instructional strategy for teaching problem solving starts with the presentation of worked examples and smoothly proceeds to independent problem solving if learners acquire more expertise" (van Merriënboer & Ayres, 2005). Exploration guides offer such an instructional strategy, providing "signposts that help guide students' learning, avoiding dead ends and dissipation" (Paiva & Costa, 2010). An exploration guide was thus written, describing the steps students should follow in interacting with each of VideoLab's modes, resorting to sample video files that were also included with the software. The end of the guide included a few multiple-choice or short open-ended questions about each mode's topic, so as to measure learning outcomes.

3. Method

This paper presents the aggregate results of two studies conducted in the academic years 2014 and 2015 with students enrolled in Editing, a first-year unit of a Film Studies undergraduate programme. Permission for the study was granted by the institution's academic administration on condition that it did not exceed the duration of a single four-hour lecture with each class. It was also stipulated that the two classes in each year would have to be taught equally, ruling out the use of a control group that would enable a comparative assessment of VideoLab's instructional effectiveness. Given these constraints, the studies were primarily intended to assess VideoLab's instructional efficacy and, in addition, students' engagement with the software and their opinion of its design quality. The studies' design employed a mixture of qualitative and quantitative research instruments to that end.

3.1. Participants

Forty students participated in the studies, divided into two homogeneous classes of 20 participants. In aggregate, 22 participants were male and 18 were female. Participants were aged 18 to 27, with a median age of 19 and a mean age of 20.2. Asked whether they were familiar with educational software, 22 participants reported little experience and 12 reported a higher level of experience. In addition, when asked about their familiarity with the subjects covered by VideoLab, 8 claimed full knowledge, 25 reported a little familiarity, and only 7 claimed no prior knowledge.

3.2. Procedure and research instruments

At the start of the lecture on VideoLab's topics, the study procedure was described to the students and they were invited to participate. The software and the sample video files were made available, and students were given the choice of running the application either on the classroom's computers or on their own laptops. Students were also provided with the exploration guide. With their written consent, participants also allowed screen recording so as to time their interactions with the software. Interactions with the software were then observed as participants followed the exploration guide and completed the included questions. Afterwards, participants were asked to complete a survey based on the LOES-S – Learning Object Evaluation Scale for Students (Kay & Knaack, 2008) – so as to collect descriptive statistics about their assessment of the software's learning value (how the software's characteristics help users learn), design quality (how well organized, helpful, and easy to use the software is), and engagement (how motivating and enjoyable the software and its theme are).

4. Findings

On average, students spent 45.4 minutes interacting with VideoLab. Their mean time spent on each mode was as follows: a) Aspect Ratio, 10.8'; b) Interlacing, 7.9'; c) Luminance, Chrominance and Sampling, 11.9'; d) Bit Depth, 6.7'. Participants also spent a mean of 7.6 minutes reading the modes' help. It was noticeable that participants grew impatient, as they spent more time in the first mode (Aspect Ratio) than in the last (Bit Depth), even though their contents and simulations may be considered of comparable complexity. A fifth of participants spent less than 5% of their time reading each mode's explanations and help; a third of participants, on the contrary, spent more than 25% of their time reading rather than interacting with the simulations.

4.1. Exploration guide outcomes

On average, participants correctly answered 61.1% of the questions presented at the end of the exploration guide. Mean correct answer percentages in each mode were as follows: a) Aspect Ratio, 67.5%; b) Interlacing, 86.3%; c) Luminance, Chrominance and Sampling, 45.0%; d) Bit Depth, 53.8%. Students experienced difficulties with the topic of Luminance, Chrominance and Sampling, while manifesting a good understanding of Interlacing, a topic with which some of the students, during the group interviews, reported having had previous contact.

Despite the reduced number of participants, the data was tentatively subjected to bivariate analysis using the SPSS software. A significant two-tailed Spearman correlation (ρ = .591, p < .001) was detected between the time participants spent reading each mode's help and their exploration guide outcomes. A significant negative correlation (ρ = -.432, p = .005) was also found between outcomes and how often participants switched modes, suggestive of a departure from the exploration guide. Usage time, however, was found to have no correlation with participants' outcomes. It should also be noted that no significant correlations were found between participants' reported knowledge of VideoLab's content, or their prior experience with educational software, and their exploration guide outcomes.
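For reference, Spearman's ρ is the Pearson correlation computed over the ranks of the two variables; with no tied ranks it reduces to ρ = 1 − 6Σd²ᵢ / (n(n² − 1)), where dᵢ is the difference between the ranks of observation i on the two variables. The sketch below (in Java, the language underlying Processing) shows the computation; the variable names and data are invented for illustration, as the study's own analysis was run in SPSS.

    import java.util.Arrays;

    // Spearman's rho via the no-ties rank formula:
    // rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    // Illustrative only; names and data are invented.
    public class SpearmanSketch {
      static double[] ranks(double[] x) {
        double[] sorted = x.clone();
        Arrays.sort(sorted);
        double[] r = new double[x.length];
        for (int i = 0; i < x.length; i++)
          r[i] = Arrays.binarySearch(sorted, x[i]) + 1;  // 1-based rank
        return r;
      }

      static double rho(double[] x, double[] y) {
        double[] rx = ranks(x), ry = ranks(y);
        int n = x.length;
        double d2 = 0;
        for (int i = 0; i < n; i++)
          d2 += (rx[i] - ry[i]) * (rx[i] - ry[i]);
        return 1.0 - 6.0 * d2 / (n * ((double) n * n - 1));
      }

      public static void main(String[] args) {
        double[] helpMinutes = {2.1, 7.5, 3.3, 9.0, 5.2};  // invented data
        double[] guideScore  = {40, 85, 55, 90, 70};
        System.out.println(rho(helpMinutes, guideScore));  // 1.0: same ordering
      }
    }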

4.2. Student evaluation

As mentioned, participants filled out the LOES-S questionnaire (Kay & Knaack, 2008), reporting their agreement, on a five-point Likert scale, with twelve statements about VideoLab's learning value, design quality, and engagement. Testing confirmed each construct's adequate reliability (Cronbach's α > .7), as defined by Nunnally (1978), and factor analysis corroborated their internal consistency. On a scale from -2 to +2, participants reported a mean learning value of 1.05 (σ = .40), a mean design quality of 1.23 (σ = .51), and a mean engagement with VideoLab of 0.68 (σ = .66). Learning value and design quality were therefore perceived as clearly positive, while perceptions of engagement, though still positive, were much more ambiguous.
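For reference, the reliability statistic used here is Cronbach's α = (k/(k−1))(1 − Σᵢσ²ᵢ/σ²ₜ) for a k-item construct, where σ²ᵢ are the individual item variances and σ²ₜ is the variance of the summed scale; the .7 threshold for adequacy follows Nunnally (1978).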

Tentative two-tailed Spearman analysis also found clear correlations between participants' opinion of VideoLab's learning value and of its design quality (ρ = .602, p < .001), between their opinion of its learning value and its engagement (ρ = .436, p = .006), and between their opinion of VideoLab's design quality and its engagement (ρ = .389, p = .014). A significant correlation was also found between students' regard for VideoLab's design quality and their exploration guide outcomes (ρ = .538, p < .001). However, no correlations were found between students reporting a better regard for VideoLab's learning value or engagement and their outcomes in completing the exploration guide.

The questionnaire included two additional questions on what participants did and did not like about VideoLab. Overall, participants praised VideoLab's learning efficacy and its ability to communicate visual concepts. In line with their outcomes, participants considered that the topics of the software's modes presented uneven learning challenges, highlighting that the topic of Luminance, Chrominance and Sampling was too challenging and hard to understand. Participants did, however, overestimate their understanding of the Bit Depth topic, while underestimating their comprehension of Interlacing. Participants also left contradictory comments on whether it would be better for the content of VideoLab to be discussed by the instructor before or after the use of the software. Participants also wrote and talked about a number of enhancements they would like to see, such as more detailed help on the topic of Luminance, Chrominance and Sampling, the ability to overlay multiple simulation effects, the use of 'tooltips' in the user interface, and a feature allowing them to save video clips. Some students praised the visual interface of the software, while others criticized it.

5. Discussion and conclusion

We showed that stricter adherence to the exploration guide's instructions (Paiva & Costa, 2010) had a positive effect on participants' outcomes in correctly answering questions about the topics covered by VideoLab, hinting that the software's educational efficacy is incomplete without the simultaneous application of, and adherence to, the exploration guide. In addition, participants with better outcomes were more likely to express positive opinions about VideoLab's design quality, which raises the question of whether the exploration guide is a key contributor to that perception, while permitting speculation on whether improvements to VideoLab's ease of use could improve outcomes. Learning outcomes differed across topics, and the overall difficulty in answering questions about the topic of Luminance, Chrominance and Sampling, confirmed by the students in their comments, should make the improvement of that mode a priority for future iterations of VideoLab.

Still, as VideoLab's learning efficacy has been determined, we feel that it exemplifies how creative computing can enable the local development of digital educational resources and software, and empower educators to use computers as expressive media (Meeken, 2012) rather than as mere playback tools for presentations and non-interactive content. Despite being born out of a specific assessment of needs, the software and the companion exploration guide can be used by students and educators in other higher education programmes, as well as in professional and vocational studies.


There is much to be gained from further studies. Beyond determining instructional efficacy, our findings warrant further study with larger populations and control groups, so as to determine the effectiveness of VideoLab use in the classroom in comparison to other lecture formats. Technology acceptance research methodologies (cf. Venkatesh, Morris, Davis, & Davis, 2003) may also be relevant to assess students' perceptions of VideoLab use, as well as educators' attitudes toward educational software development using creative coding tools. Future research is also needed to address the gap in the literature regarding how educational software interacts with the instructor, the curriculum, and other course materials (Escueta et al., 2017). Closer work with interested educators is thus highly desirable. As mentioned, VideoLab is open source, and new contributors to the project may therefore play a role in the future developments described, as well as in the distribution, reuse, and further study of the use of VideoLab in the classroom.

Acknowledgements

This work was supported by the UT Austin/Portugal Program. Eduardo Morais was supported by the Portuguese Foundation for Science and Technology (FCT) doctoral scholarship PD/BD/128416/2017.

References

Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence-based review (NBER Working Paper No. 23744). Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w23744

Hassenzahl, M., & Tractinsky, N. (2006). User experience – a research agenda. Behaviour & Information Technology, 25(2), 91–97. https://doi.org/10.1080/01449290500330331

Hausman, J. J. (2000). Arts education in the twenty-first century: A personal view. Arts Education Policy Review, 102(2), 17–18. https://doi.org/10.1080/10632910009599980

Kay, R. H., & Knaack, L. (2008). Assessing learning, quality and engagement in learning objects: The Learning Object Evaluation Scale for Students (LOES-S). Educational Technology Research and Development, 57(2), 147–168. https://doi.org/10.1007/s11423-008-9094-5

Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional design methods across different media. Learning and Instruction, 13(2), 125–139. https://doi.org/10.1016/S0959-4752(02)00016-6

Meeken, L. (2012). System opacity and student agency in the new media landscape. Voke, (pilot issue).

Munteanu, L. H., Gorghiu, G., & Gorghiu, L. M. (2014). The role of new technologies for enhancing teaching and learning in arts education. Procedia - Social and Behavioral Sciences, 122, 245–249. https://doi.org/10.1016/j.sbspro.2014.01.1336

Norman, D. (1988). The design of everyday things. Cambridge, MA: The MIT Press.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Paiva, J. C., & Costa, L. A. (2010). Exploration guides as a strategy to improve the effectiveness of educational software in chemistry. Journal of Chemical Education, 87(6), 589–591.

Reas, C., & Fry, B. (2006). Processing: Programming for the media arts. AI & Society, 20, 526–538. https://doi.org/10.1007/s00146-006-0050-9

Selwyn, N. (2010). Schools and schooling in the digital age: A critical analysis. London: Routledge.

Snyder, I., & Bulfin, S. (2007). Digital literacy: What it means for arts education. In L. Bresler (Ed.), International handbook of research in arts education (pp. 1297–1310). Dordrecht: Springer.

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28. https://doi.org/10.3102/0034654310393361

van Merriënboer, J. J. G., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research and Development, 53(3), 5–13. https://doi.org/10.1007/BF02504793

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

Wilks, J., Cutcher, A., & Wilks, S. (2012). Digital technology in the visual arts classroom: An [un]easy partnership. Studies in Art Education, 54(1), 54–65.
