
Contents lists available at ScienceDirect

Journal of Hospitality, Leisure, Sport & Tourism Education

Journal of Hospitality, Leisure, Sport & Tourism Education 13 (2013) 132–136


journal homepage: www.elsevier.com/locate/jhlste

Academic Papers

Assessment preferences of sport science students

Yunus Arslan*

2000 Evler Mah., Zübeyde Hanım Cad., Nevşehir Üniversitesi Yerleşkesi, 50300 Nevşehir, Turkey

Article info

Keywords: Assessment; Assessment preferences; Education; Sport education; Sport science students

1473-8376/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.jhlste.2013.08.003

* Tel.: +90 3842281004. E-mail address: [email protected]

Abstract

This study examined sport science students' assessment preferences in different undergraduate courses. Turkish university students (n = 304) completed an adapted version of the Assessment Preferences Inventory (API). Results indicated that self-assessment, observation and peer assessment were the most preferred assessment tools. Multiple choice and performance-based tasks were the most preferred item format and task types, respectively. Physical Education and Sport course students reported a greater preference for Alternative Assessment and Simple/Multiple Choice; Recreation Education course students preferred Classical Assessment and Complex/Constructivist. Consequently, students wanted prior knowledge of assessment format, indicating the need to inform them prior to beginning instruction.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Assessment is an indispensable and integral element of education, and students' individual differences are the most fundamental emphasis of today's assessment processes. Hence, it is important to consider students' assessment preferences in educational settings.

Assessment preference was described as an "imagined choice between alternatives in assessment types" (p. 647) by Van de Watering, Gijbels, Dochy, and Van Der Rijt (2008). Garcia-Ros and Perez-Gonzalez (2011) highlighted three complementary reasons for analysing university students' assessment preferences: (1) to demonstrate the relationship between these preferences and the way students approach learning and studying the materials; (2) to consider new assessment methods in university academic contexts; and (3) to emphasize a perspective of improving the quality of university teaching. In line with the third reason, the present study examined university students' assessment preferences to glean clues about how to improve the teaching and learning process at universities. There is an extensive and growing literature that examines students' assessment preferences in different higher education settings (Amin, Kaliyadan, & Al-Muhaidip, 2011; Baeten, Dochy, & Struyven, 2008; Bartram & Bailey, 2010; Birenbaum, 1997; Birenbaum & Feldman, 1998; Birenbaum, 2007; Dogan & Kutlu, 2011; Furnham, Batey, & Martin, 2011; Garcia-Ros & Perez-Gonzalez, 2011; Gijbels & Dochy, 2006; Struyven, Dochy, & Janssens, 2005; Struyven, Dochy, & Janssens, 2008; Van de Watering et al., 2008; Zoller & Ben-Chaim, 1997).

To the author's knowledge, there is no study specifically related to the assessment preferences of sport science students (SSSs). SSSs take theoretical and practical modules in the different courses at their schools, and classroom studies are supplemented by practical studies. In these modules, they learn how to teach their students the importance of lifelong fitness via physical education in elementary, secondary and high schools (The Physical Education and Sport Course), coach athletes to nurture their athletic and personal growth (The Trainer Education Course), and administer sport programs in community recreation facilities (The Recreation Education Course) and sport associations (The Sport Management Course). Assessment is important in both theoretical and practical modules, and SSSs are frequently assessed through conventional and non-conventional assessment procedures to determine whether the concepts and modules taught have been learned and mastered well.

There is a need for studies that examine the assessment preferences of SSSs, who learn about multi-disciplinary aspects of sport in society. Therefore, the main purpose of the present study was to examine SSSs' assessment preferences. To do so, the researcher investigated the following research questions: (a) what is the general profile of SSSs in terms of the assessment-form related dimension? and (b) are there any significant differences among courses in terms of the assessment-form related dimension?

2. Method

In this study, a quantitative, non-experimental and cross-sectional research design was used. The data were collected by means of self-report Likert-type scales.

2.1. Participants

Participants in the study were 304 undergraduate SSSs (M_age = 21.42, SD_age = 3.04) enrolled in a School of Sport Sciences and Technology (90 freshmen, 74 sophomores, 74 juniors and 66 seniors) at a Turkish university. 89 participants (42 females and 47 males) were studying in the Physical Education and Sport Course (PES), 72 (25 females and 47 males) in the Trainer Education Course (TED), 82 (42 females and 40 males) in the Recreation Education Course (REC) and 61 (39 females and 22 males) in the Sport Management Course (SM). Women were 48.7% of the participants (N_women = 148) and men were 51.3% (N_men = 156).

2.2. Instrument

"The Assessment Preferences Inventory (API)", originally developed by Birenbaum (1994) and adapted to Turkish by Gülbahar and Büyüköztürk (2008), was used to measure SSSs' assessment preferences in this study. The original API was a 5-point Likert-type questionnaire containing 67 items referring to three independent content dimensions: the assessment-form related dimension, the examinee related dimension, and grading and recording. Each item was rated on a 5-point scale indicating the extent to which the student would like to be assessed in that manner, where 1 = to a very small extent and 5 = to a very large extent. In this study, the 32-item assessment-form related dimension of the API was used. The scales are described in detail below (Büyüköztürk & Gülbahar, 2010; Gülbahar & Büyüköztürk, 2008).

2.2.1. Assessment-form related dimensions (32 items)

1. Assessment Types Scales (ATS, 16 items)
   - Alternative Assessment Types (AAT – projects, portfolios, discussion, etc.; α = .79)
   - Classical Assessment Types (CAT – written and oral exams, etc.; α = .81)

2. Item Format/Task Type Scales (IFTTS, 12 items)
   - Simple/Multiple Choice (S/MC – multiple choice, true–false, etc.; α = .71)
   - Complex/Constructivist (C/C – concept maps, performance-based skills, etc.; α = .76)

3. Pre-assessment Preparation Scale (PAPS, 4 items): guidelines for preparation, examples, etc. (α = .79)

Before the main study, the inventory was pilot-tested with 245 SSSs in an intact group to determine its validity and reliability. In confirmatory factor analyses (CFA), the indexes used to determine goodness-of-fit were: (a) the root mean square error of approximation (RMSEA), for which values below .08 indicate a good fit (Browne & Cudeck, 1993; MacCallum, Browne, & Sugawara, 1996); (b) the standardized root-mean-square residual (SRMR), for which values less than .08 suggest a good fit (Hu & Bentler, 1999); and the indexes for which values greater than .90 indicate a good fit according to Byrne (1994), or greater than .95 according to Hu and Bentler (1999), namely (c) the comparative fit index (CFI); (d) the goodness-of-fit index (GFI); (e) the adjusted goodness-of-fit index (AGFI); and (f) the normed-fit index (NFI). In the pilot study, CFA results showed that the 32-item assessment-form related dimension of the API displayed acceptable values (RMSEA = .07, SRMR = .04, CFI = .92, GFI = .93, AGFI = .91, NFI = .90). The computed internal consistency values for each of the dimensions used in the main study were as follows: ATS – AAT (α = .79), CAT (α = .81); IFTTS – S/MC (α = .71), C/C (α = .76); and PAPS (α = .79).
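The internal consistency values above are Cronbach's alpha coefficients. As a minimal sketch of how such a coefficient is computed from an item-response matrix — on synthetic Likert data, not the study's responses, and with an illustrative function name — one could write:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses (e.g. 1-5)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 8-item scale for 245 simulated respondents (pilot-sample size):
# items share a latent trait, so alpha should come out high.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(245, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.8, size=(245, 8))), 1, 5)
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")
```

The formula follows the standard definition, α = k/(k−1) · (1 − Σ item variances / total-score variance); correlated items inflate the total-score variance relative to the item variances, pushing α toward 1.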

2.3. Procedure

Permission was sought from the Directorship of the School of Sport Sciences and Technology to conduct this study. The study was conducted in a manner consistent with the institutional ethical requirements for human experimentation in accordance with the Declaration of Helsinki. For the purpose of this research, the assessment-form related dimension (32 items) of the API was administered to SSSs in the fall semester of 2011–2012.

2.4. Data analysis

Prior to running statistical analyses, all variables were examined to test the parametric test assumptions (Tabachnick & Fidell, 2007; see also http://en.wikipedia.org/wiki/Parametric_statistics). The univariate statistics indicated that the values for skewness and kurtosis were within the ranges of −.21 to +.03 and −.51 to −.05, respectively. The univariate skewness and kurtosis scores met the criterion of less than ±1 for all variables (Pallant, 2010; Büyüköztürk, 2009). A test of multivariate normality among variables was not significant (χ² = 1.406, p > .05), and the standardized values for multivariate skewness and kurtosis were .89 (z = 1.185, p > .05) and 34.67 (z = −.03, p > .05), respectively. According to these findings, it can be concluded that the univariate and multivariate normality assumptions were satisfied. To address the main study purposes, results are presented as means and standard deviations to summarize the data. One-way multivariate analysis of variance (MANOVA) for ATS and IFTTS, and one-way analysis of variance (ANOVA) for PAPS, were used to compare mean differences among courses. When the F-value calculated in a MANOVA was significant, follow-up ANOVAs were used to assess differences among courses for each dependent variable.
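The univariate screening step described above — checking that skewness and kurtosis fall within ±1 before applying parametric tests — can be sketched as follows. The scores below are synthetic stand-ins generated for illustration; only the subscale names come from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins for the five API subscale scores (n = 304 each)
scores = {name: np.clip(rng.normal(3.1, 0.7, 304), 1, 5)
          for name in ("AAT", "CAT", "S/MC", "C/C", "PAPS")}

for name, x in scores.items():
    skew = stats.skew(x)
    kurt = stats.kurtosis(x)   # Fisher definition: 0 for a normal distribution
    ok = abs(skew) < 1 and abs(kurt) < 1
    print(f"{name}: skewness = {skew:+.2f}, kurtosis = {kurt:+.2f}, within ±1: {ok}")
```

Note that `scipy.stats.kurtosis` returns excess kurtosis (normal curve = 0), which matches the ±1 criterion used in the paper.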

3. Results

3.1. SSSs' preferences about alternative and classical assessment

Results indicated that SSSs prefer self-assessment, observation, peer assessment, oral tests – in the form of a group discussion where the instructor observes and assesses the contribution of each of the participants – and individual presentations as assessment methods. Conversely, papers/projects, individual oral tests – wherein the questions are given half an hour prior to the test, and answers can be prepared with supporting materials – e-portfolios, portfolios – collected work, finished and in progress – written tests with supporting materials and a time limit, and take-home exams were the least preferred assessment methods. Table 1 presents SSSs' preferences about alternative and classical assessment in the entire sample and in each course.

A comparison of the mean scores on the ATS of the API yielded a significant difference among courses, Wilks's Λ = .91, F(6, 598) = 4.647, p < .05, η² = .04. An ANOVA on each dependent variable was conducted as a follow-up test to the MANOVA. The ANOVAs on the alternative assessment scores, F(3, 300) = 3.942, p < .05, η² = .03, and the classical assessment scores, F(3, 300) = 5.336, p < .05, η² = .05, were significant. Post hoc analysis (the Tukey test) following the ANOVA for alternative assessment scores revealed a significant difference between PES and TED students, due to PES students obtaining higher scores. The Tukey test result for classical assessment scores indicated a significant difference between REC and PES students, due to REC students obtaining higher scores.
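A follow-up one-way ANOVA with an eta-squared effect size, as reported above, can be sketched like this. The group means and SDs echo Table 1's Alternative Assessment row, but the samples are randomly generated, so the resulting statistics will not reproduce the paper's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic course groups; means/SDs/sizes borrow Table 1 (AAT row) for flavour
groups = {"PES": rng.normal(3.23, 0.71, 89),
          "TED": rng.normal(2.92, 0.70, 72),
          "REC": rng.normal(3.17, 0.69, 82),
          "SM":  rng.normal(2.94, 0.58, 61)}

f_stat, p_value = stats.f_oneway(*groups.values())

# Eta squared = SS_between / SS_total
all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"F(3, {len(all_scores) - 4}) = {f_stat:.3f}, p = {p_value:.4f}, eta^2 = {eta_sq:.3f}")
```

With four groups and N = 304, the degrees of freedom come out as F(3, 300), matching the values reported in the text.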

3.2. SSSs' preferences about item format and task type

Multiple choice format, true–false questions and matching questions were the most preferred assessment methods in terms of item format and task type. On the other hand, complex tasks or skills with more than one possible answer, open-ended questions requiring long answers (essays) and detailed tasks and skills facilitated by the teacher at every stage were the least preferred. Among complex/constructivist item formats and task types, performance-based tasks or skills were the most preferred. Table 2 presents SSSs' preferences about item format and task type in the entire sample and in each course.

As can be seen in Table 2, SSSs mostly prefer items and tasks appearing under the S/MC heading. The MANOVA on the mean scores of the IFTTS of the API revealed a significant difference among courses, Wilks's Λ = .95, F(6, 598) = 2.395, p < .05, η² = .02. Based on the MANOVA results, an ANOVA was conducted on each dependent variable. The ANOVA on the S/MC scores was not significant, F(3, 300) = .912, p > .05, η² = .01, while the ANOVA on the C/C scores was significant, F(3, 300) = 4.418, p < .05, η² = .04. The Tukey test results for C/C scores indicated significant mean differences between REC and PES students, and also between PES and SM students. REC students obtained higher scores compared with PES, TED and SM students.

Table 1
Means and standard deviations on the Assessment Types Scales of the Assessment Preferences Inventory in the entire sample and in each course.

Scale                          Total sample   PES          TED          REC          SM
                               (n = 304)      (n = 89)     (n = 72)     (n = 82)     (n = 61)
                               M     SD       M     SD     M     SD     M     SD     M     SD
Alternative Assessment Types   3.08  .69      3.23  .71    2.92  .70    3.17  .69    2.94  .58
Classical Assessment Types     2.90  .96      2.63  1.01   2.90  .84    3.21  .93    2.90  .94

Note: PES = Physical Education and Sport; TED = Trainer Education; REC = Recreation Education; SM = Sport Management.

Table 2
Means and standard deviations on the Item Format/Task Type Scales of the Assessment Preferences Inventory in the entire sample and in each course.

Scale                    Total sample   PES          TED          REC          SM
                         (n = 304)      (n = 89)     (n = 72)     (n = 82)     (n = 61)
                         M     SD       M     SD     M     SD     M     SD     M     SD
Simple/Multiple Choice   3.56  .70      3.66  .68    3.51  .68    3.55  .73    3.49  .70
Complex/Constructivist   3.00  .70      3.12  .62    2.86  .72    3.14  .74    2.81  .65

Note: PES = Physical Education and Sport; TED = Trainer Education; REC = Recreation Education; SM = Sport Management.

Table 3
Means and standard deviations on the Pre-assessment Preparation Scale of the Assessment Preferences Inventory in the entire sample and in each course.

Scale                              Total sample   PES          TED          REC          SM
                                   (n = 304)      (n = 89)     (n = 72)     (n = 82)     (n = 61)
                                   M     SD       M     SD     M     SD     M     SD     M     SD
Pre-assessment Preparation Scale   4.06  .83      4.07  .58    3.92  .80    4.07  .87    4.19  .78

Note: PES = Physical Education and Sport; TED = Trainer Education; REC = Recreation Education; SM = Sport Management.


3.3. SSSs' preferences about pre-assessment preparation

SSSs' preferences about pre-assessment preparation in the entire sample and in each course are presented in Table 3. The findings related to the PAPS revealed that SSSs want to know all the details of the assessment process. The ANOVA on the mean scores of the PAPS of the API was not significant among courses, F(3, 300) = 1.139, p > .05, η² = .01.

4. Discussion

The examination of students' assessment preferences is significant for understanding factors that impinge on the teaching-learning process and its outcomes in sport education. From this point of view, this study was designed to examine SSSs' assessment preferences in different courses.

The results of the current study indicated that, in terms of assessment types, SSSs prefer some kinds of alternative assessment such as self-assessment, observation and peer assessment. Our findings regarding assessment type preferences showed some significant differences in terms of SSSs' courses. Based on these findings, PES students prefer alternative assessments more than TED students, and, not surprisingly, REC students wanted to be assessed with classical assessment tools more than PES students. To become an in-service physical education teacher, PES students must take both theoretical and practical modules related to pedagogical content knowledge in their physical education teacher education (PETE) program. In the PETE program, they study and practice non-conventional assessments and become familiar with alternative assessment tools. Accordingly, our findings may reflect a lack of such experiences among TED, REC and SM students, or PES students' educational history.

SSSs were also asked about their preferences for item format/task types. SSSs preferred items and tasks appearing under the S/MC heading: multiple choice format, true–false questions and matching questions. Complex tasks or skills with more than one possible answer and open-ended questions requiring long answers (essays) were not given priority by SSSs in their assessment preferences. The issue of why some students prefer multiple choice format while others prefer essays has been discussed in previous studies (Birenbaum, 2007; Birenbaum & Feldman, 1998; Furnham et al., 2011). According to these studies, university students holding a deep learning approach tended to prefer open-ended (essay) items whereas those holding a surface approach tended to prefer multiple-choice format. The multiple choice format is often preferred because students think it reduces stress and test anxiety and is easy to prepare for and to take (Traub & MacRury, 1990), and some students prefer it because they are used to it, not because they are good at it (Van de Watering et al., 2008). Still, more research is needed to understand the messages conveyed through assessment preferences, and to determine the differences between what students want and what students need.

Among C/C task types, performance-based tasks or skills were the most preferred by SSSs. This is not surprising given the performance-based educational system in sport education and students' sports-related history. According to our findings regarding C/C task types, REC students tended to prefer more complex and constructivist item formats and task types than students in the other courses. These findings may be explained by a lack of such experiences among PES, TED and SM students, or by REC students' satisfaction with previous experiences in their educational program. In their multidisciplinary educational program, REC students become versatile and need more complex tasks to show their abilities in the assessment process. If they are given assessment methods that contain complex and constructivist item formats or tasks, they are likely to be more motivated and, hence, to perform better (Amin et al., 2011).


When the results of the PAPS were investigated, it was found that SSSs want to know all the details of the assessment process. Similar to the current results, Büyüköztürk and Gülbahar (2010) concluded that students want to have active roles in the assessment process and want their instructors to take individual differences into consideration. In parallel with the above proposition, it can be concluded that formative assessment processes need to be put in place in sport education.

In conclusion, although the current study was limited in scope in that participants came from only one School of Sport Sciences and Technology in Turkey, it provides some interesting information about SSSs' assessment preferences. The results of the present study indicated that SSSs preferred some forms of alternative assessment, the multiple choice format and performance-based tasks or skills. There were significant differences in assessment preferences in terms of SSSs' courses, and SSSs wanted to know all of the details of the assessment process. These results may contain clues about how to improve assessment procedures in sport education so that both SSSs and their teachers find value in assessment tasks that enhance teaching and learning. Another limitation that needs to be acknowledged is that only quantitative data were gathered from SSSs. Future studies should include interviews of SSSs to obtain qualitative data, to examine each participant's background in terms of performance, motivation and engagement with their studies, approach to learning and learning styles, and to expand the number of assessment alternatives examined, which may provide additional insight into assessment preferences. It can also be suggested that future studies investigate the assessment preferences of SSSs in different types of courses related to sport education in several different countries.

References

Amin, T. T., Kaliyadan, F., & Al-Muhaidip, N. S. (2011). Medical students' assessment preferences at King Faisal University, Saudi Arabia. Advances in Medical Education and Practice, 2, 95–103.

Baeten, M., Dochy, F., & Struyven, K. (2008). Students' approaches to learning and assessment preferences in a portfolio-based learning environment. Instructional Science, 36, 359–374.

Bartram, B., & Bailey, C. (2010). Assessment preferences: A comparison of UK/international students at an English university. Research in Post-Compulsory Education, 15(2), 177–187.

Birenbaum, M. (1994). Toward adaptive assessment – the student's angle. Studies in Educational Evaluation, 20, 239–255.

Birenbaum, M. (1997). Assessment preferences and their relationship to learning strategies and orientations. Higher Education, 33, 71–84.

Birenbaum, M. (2007). Assessment and instruction preferences and their relationship with test anxiety and learning strategies. Higher Education, 53, 749–768.

Birenbaum, M., & Feldman, R. A. (1998). Relationships between learning patterns and attitudes towards two assessment formats. Educational Research, 40(1), 90–97.

Büyüköztürk, Ş., & Gülbahar, Y. (2010). Assessment preferences of higher education students. Egitim Arastirmalari – Eurasian Journal of Educational Research, 41, 55–72.

Büyüköztürk, Ş. (2009). Sosyal Bilimler İçin Veri Analizi El Kitabı [Data analysis handbook for social studies] (10th ed.). Ankara: PEGEM.

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen, & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage.

Byrne, B. M. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks, CA: Sage Publications.

Dogan, C. D., & Kutlu, O. (2011). Factors related learning which effect pre-service teachers' preference on alternative assessment methods. Kastamonu University Kastamonu Education Journal, 19(2), 459–474.

Furnham, A., Batey, M., & Martin, N. (2011). How would you like to be evaluated? The correlates of students' preferences for assessment methods. Personality and Individual Differences, 50, 259–263.

Garcia-Ros, R., & Perez-Gonzalez, F. (2011). Assessment preferences of preservice teachers: Analysis according to academic level and relationship with learning styles and motivational orientation. Teaching in Higher Education, iFirst Article, 1–13.

Gijbels, D., & Dochy, F. (2006). Students' assessment preferences and approaches to learning: Can formative assessment make a difference? Educational Studies, 32(4), 399–409.

Gülbahar, Y., & Büyüköztürk, Ş. (2008). Adaptation of assessment preferences inventory to Turkish. Hacettepe University Journal of Education, 35, 148–161.

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55.

MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149.

Pallant, J. (2010). SPSS survival manual: A step by step guide to data analysis using the SPSS program. Berkshire, England: Open University Press/McGraw-Hill Education.

Struyven, K., Dochy, F., & Janssens, S. (2005). Students' perceptions about evaluation and assessment in higher education: A review. Assessment and Evaluation in Higher Education, 30(4), 325–341.

Struyven, K., Dochy, F., & Janssens, S. (2008). The effects of hands-on experience on students' preferences for assessment methods. Journal of Teacher Education, 59(1), 69–88.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Pearson Education.

Traub, R. E., & MacRury, K. (1990). Multiple choice vs. free response in the testing of scholastic achievement. In K. Ingenkamp, & R. S. Jager (Eds.), Tests und Trends 8: Jahrbuch der Pädagogischen Diagnostik (pp. 128–159). Weinheim und Basel: Beltz.

Van de Watering, G., Gijbels, D., Dochy, F., & Van der Rijt, J. (2008). Students' assessment preferences, perceptions of assessment and their relationships to study results. Higher Education, 56, 645–658.

Zoller, U., & Ben-Chaim, D. (1997). Examination type preferences of college science students and their faculty in Israel and USA: A comparative study. School Science and Mathematics, 97(1), 3–12.