Exploring the Conceptual and Psychometric Properties of Classroom Assessment
Richard DLC Gonzales, University of Santo Tomas Graduate School, Philippines
Charito G. Fuggan, St. Paul University Philippines & Gattaran East District, Department of Education, Philippines
Paper presented at the International Conference of Educational Measurement and Evaluation, Traders Hotel, Manila, August 9-11, 2012
Introduction: What is Assessment?
A process which teachers use to grade students' assignments (Harlen, 2008)
Standardized testing imposed by schools (Marzano, 2006)
Any activity designed to collect information to be used as feedback to modify teaching and learning activities (Black & Wiliam, 1998) or to improve instruction and students' performance (Cohen & Hill, 2000).
A means of providing an index of student learning (Nitko & Brookhart, 2007)
A testing activity that takes place after teachers have taught a particular subject or lesson (Sanchez & Brisk, 2004; Stiggins, 2002).
Why do teachers do classroom assessment?
To collect information about the performance of their students in school (Bennett & Gitomer, 2009; Nitko & Brookhart, 2007).
To inform students of their performance in an assessment process in the form of feedback or feed-forward (Mbelani, 2008; Murray, 2006).
To allow students to know how else they can improve their performance (Bennett & Gitomer, 2009; Mory, 1992).
To inform parents how their children are performing in school (Popham, 2008; Stiggins, 2002).
To provide school administrators and other teachers with information on the assessment process and results (Earl, 2005; Shepard, 2000).
Role of Assessment
The important role of assessment in both schooling and learning strongly suggests that teachers must ensure that their assessment practices adhere to the highest standards (Jones & Tanner, 2008; Kizlik, 2009).
Classroom assessment must always begin with a clear statement of the intended learning outcomes and benefits of teaching (Stiggins et al., 2004).
Once learning targets are defined, the next crucial step in developing assessment measures is to determine what types of questions or tasks are to be included and what form of test is to be used (Robinson-Kurpius, 2006; Popham, 2008).
Teachers are required to observe the basic principles and guidelines in constructing an assessment tool (Johnson & Johnson, 2002).
Objectives of the Study
While teachers are trained to develop sound and valid measures, their conceptions and beliefs may affect the way they conduct their classroom assessment activities (Danielson, 2008).
Hence, the goal of the present study is to develop a psychometrically sound self-report questionnaire on the classroom assessment practices of teachers.
The term “practices” is defined in this study as activities that teachers do in relation to conducting assessment – from teacher planning to reporting to utilization of test results.
Methods and Procedures
Step 1: Establishing the Construct Definition and Content Domain
A simple survey was conducted during a series of workshops in the Philippines, Nepal, and the Kyrgyz Republic.
155 teachers, each with at least 5 years of teaching experience, were asked to describe their assessment practices.
The 136 initial statements generated in the survey were reviewed vis-à-vis the related literature.
Step 2: Generating and Judging Measurement Items
The 136 original items were sorted according to thematic similarities using a simple Q-sort (Stephenson, 1953). Items were rephrased into a style compatible with verbal-frequency statements, using a 5-point Verbal Frequency Scale (1 = never … 5 = always).
Three psychometrists reviewed the items and determined their fitness to the intended construct and suitability to the identified response format.
The experts were asked to categorize the items into the 7 hypothesized components.
Using a priori criteria (DeVellis, 1991), the panel was asked to review the items for clarity, relevance, duplication, and similarity.
A language expert reviewed the items for intelligibility, clarity and correctness of grammar.
82 items were accepted as written and 7 were revised, yielding the 89-item pretest form of the questionnaire.
Step 3: Pilot Testing
The 89-item pretest form was administered to 308 primary/elementary teachers during workshops in the Philippines (43%), Nepal (24%), and the Kyrgyz Republic (33%).
There were 207 females and 101 males, with ages ranging from 26 to 58 (Mean = 43.87; SD = 7.92).
All teachers had at least 5 years of teaching experience and had attended at least one professional development seminar on educational assessment prior to the administration of the pilot questionnaire.
Step 4: Data Analysis
Using IBM SPSS Version 19 (2010), descriptive statistics of the items were computed. Factor structure was assessed with Exploratory Factor Analysis (EFA) using Principal Component Analysis (PCA); the largest factors were extracted and subjected to varimax rotation with Kaiser normalization.
Items were retained if their factor loading was greater than .50.
Factors were retained if they had an eigenvalue of at least 1.0 and contributed at least 3.5% of the variance (sum of squared factor loadings).
Cronbach’s Alpha (α) coefficient was calculated to examine the internal consistency/reliability for the total scale and for each identified factor.
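The study ran these analyses in SPSS. Purely as an illustration of the retention criteria and the reliability statistic, the Kaiser eigenvalue rule and Cronbach's alpha can be sketched in Python on simulated five-point responses (the sample size matches the pilot, but the item count, loadings, and data below are simulated, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 308 respondents answering 20 five-point items that share one
# underlying factor, mimicking a block of the pilot questionnaire.
n_respondents, n_items = 308, 20
latent = rng.normal(size=(n_respondents, 1))
noise = rng.normal(size=(n_respondents, n_items))
responses = np.clip(np.round(3 + latent + noise), 1, 5)

# Kaiser criterion: retain components whose eigenvalue of the item
# correlation matrix is at least 1.0.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_retained = int(np.sum(eigenvalues >= 1.0))

# Proportion of total variance each retained component explains.
explained = eigenvalues[:n_retained] / n_items

# Cronbach's alpha for the item block:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
item_vars = responses.var(axis=0, ddof=1)
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

print(f"components with eigenvalue >= 1: {n_retained}")
print(f"variance explained by first component: {explained[0]:.1%}")
print(f"Cronbach's alpha: {alpha:.2f}")
```

Because the simulated items share a single strong factor, the first component dominates the eigenvalue spectrum and alpha is high, mirroring the pattern reported for the CAP-Q.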
Results: Descriptive Statistics
Results: Factor Structure (EFA)
The PCA resulted in 11 factors with eigenvalues greater than 1, accounting for 79% of the total variance.
The first 5 factors described distinct classroom assessment practices, explaining 70.71% of the total variance.
Each of the 5 factors contributed at least 3.5%, with Factor 1 alone contributing 49.44%.
The remaining factors could not be easily interpreted.
56 items remained after the EFA.
Results: Psychometric Data (5-Factor Solution)
Factor 1 (Assessment Planning) – 14 items, 49.44% of variance
Factor 2 (Assessment Item Preparation) – 20 items, 6.50%
Factor 3 (Assessment Administration and Scoring) – 12 items, 6.26%
Factor 4 (Reporting and Grading) – 5 items, 4.80%
Factor 5 (Assessment Data Utilization and Evaluation) – 5 items, 3.53%
The Classroom Assessment Practices Framework
[Framework diagram: the Classroom Assessment Practices Questionnaire (CAP-Q) at the center, connected to the five factors – Assessment Planning; Assessment Item Preparation; Assessment Administration and Scoring; Reporting of Scores and Grading; Assessment Data Utilization and Evaluation.]
Results: Descriptive Statistics of the 5 Factors
Results: Internal Consistency and Item Homogeneity
Implications for Use and Further Research
The CAP-Q will be useful to school principals and administrators in determining how teachers conduct classroom assessment.
Since the CAP-Q is a self-report measure, a high mean score cannot simply be assumed to imply high teacher knowledge and skill, nor a low score the need for professional development and training.
The CAP-Q could be more useful as a tool for identifying teachers' professional development needs if the items were converted into an observational checklist and validated against teachers' self-reports.
A confirmatory factor analysis (CFA) is suggested to confirm the item composition and factor structure of the instrument.
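To make the suggested CFA concrete, the hypothesized five-factor measurement model could be written in lavaan-style syntax (usable with, e.g., the Python semopy package). The factor names below follow the paper, but the item labels are placeholders, not the actual CAP-Q item numbers:

```python
# Hypothetical lavaan-style CFA specification of the five CAP-Q factors.
# Item labels (pl1, ip1, ...) are placeholders; a real model would list
# the 14, 20, 12, 5, and 5 retained items under their respective factors.
CAPQ_CFA_MODEL = """
Planning        =~ pl1 + pl2 + pl3
ItemPreparation =~ ip1 + ip2 + ip3
AdminScoring    =~ as1 + as2 + as3
ReportGrading   =~ rg1 + rg2 + rg3
DataUtilization =~ du1 + du2 + du3
"""

# With item-level responses in a pandas DataFrame `df`, the model could be
# fitted along the lines of:
#   import semopy
#   model = semopy.Model(CAPQ_CFA_MODEL)
#   model.fit(df)
```

Model fit indices from such an analysis would indicate whether the five-factor structure recovered by the EFA holds in an independent sample.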