Uploaded by camilla-joy-fields
As we did for the prior class session, please walk through the PowerPoint here and complete the embedded tasks within. When complete, email me your responses by Thursday this week. THANKS!
- Criteria, validity, and reliability
- Planning, preparation, and classroom experiences
- Engaging all types of learners
Please prepare a brief description of what you believe is an engaging lesson (any single one is fine, or if you want to discuss it in terms of a unit lesson plan, you are welcome to do that as well…)
UBD’s focus: the correct science behind the learning that we design for kids
Many educators are not familiar enough with assessment, grading, statistics (descriptive & inferential)
Examples of correct science in teaching & learning:
- Use of analytic rubrics
- When/how to use checklists
- Knowing what content validity is
- How to derive a meaningful grade
- Use a UBD template to design a unit
- Interpret 4Sight, PSSA, and/or PVAAS data
- Use that data to inform their instruction in class
- Use technology as a tool to manage, search, and design instruction
- Use multiple methods and strategies to engage students in learning
Most states didn't have any accountability system. The following states did: NY (Regents), Texas (TAAS), and PA (PSSA). PA used a system of accountability beginning in 1997.
A system of rewards and sanctions was put into place:
- Rewards for high, sustained attendance & graduation rates
- Sanctions from the state if a school was considered persistently dangerous or underperforming
Underperforming schools were placed on PA's "empowerment list," where they could be reconstituted (taken over by the state – Chester is an example).
PA was already in a better place than most: schools were accustomed to an accountability system
Problem: PA's system then only looked at whole-school (and subgroup) performance. Individual student performance was never tracked, and no software system tracked student progress within a district or within the state. As a result of NCLB, the state radically changed its system to reflect individual student performance.
From early 2002 to the present (only 9 years!), schools have been radically changing professional development to reflect these massive changes.
We SHOULD evaluate student work using multiple criteria, which then represent a grade.
- Pencil-and-paper tests and quizzes are one form of criteria.
- Performance assessments are often lacking, non-existent, poorly conceived, or weighted inappropriately.
- There is not necessarily one correct answer for most performance assessments.
- Rubrics, built on criteria, need to exist in order to judge student work consistently and fairly across all students.
Briefly respond to the following… Consider how many "grades" should make up a student's marking period grade. Also consider the types of assessments that should be included in that marking period grade.
Rubric defined: a criterion-based scoring guide that uses fixed measurements and descriptions for each score point
Rubrics differentiate quality, performance, and/or understanding along a continuum
Checklists: used when assessing a yes/no, right/wrong, or present/absent sort of item. Checklists are good for assessing classroom participation (including active participation, behavior, attendance…), but they are NOT rubrics.
2 types of rubrics – holistic and analytic (the PSSA open-ended response is scored holistically).

Holistic:
- Provides an overall impression
- Yields a single score (example: 4, 3, 2, 1)
- No specific feedback to the learner

Analytic (examples seen within this class's syllabus):
- Better than holistic
- Divides the performance/product into separate traits or dimensions
- Specific to the needs of the learner, giving specific feedback to the learner
We must provide concrete answers to whether students understand – good rubrics do this
Wiggins & McTighe recommend using at least 2 traits/characteristics in the design of an analytic rubric
Assigning grades to criteria/rubrics:
- We must be careful about giving grades to just about every piece of work without making clear the criteria and the weight of each criterion.
- It is not a good practice to average those criteria together – doing so can skew what we need to know about each learner's needs.
- This is why raw point scoring is more welcome: it breaks down categories much better than a scaled or averaged score (just as the PSSA breaks results out by anchor).
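As a small illustration of why averaging rubric ratings can hide what each learner needs: two students can earn identical averages (and even identical totals) while showing very different per-criterion profiles. The criteria names, point values, and student ratings below are hypothetical, not from this presentation.

```python
# Analytic rubric sketch (hypothetical criteria and point weights):
# each criterion is rated 1-4, then converted to raw points per category.
criteria_points = {"content": 40, "organization": 40, "mechanics": 20}

def raw_score(ratings, points=criteria_points, max_rating=4):
    """Convert 1-4 rubric ratings into raw points for each criterion."""
    return {c: ratings[c] / max_rating * pts for c, pts in points.items()}

student_a = {"content": 4, "organization": 2, "mechanics": 3}
student_b = {"content": 2, "organization": 4, "mechanics": 3}

avg_a = sum(student_a.values()) / len(student_a)  # 3.0
avg_b = sum(student_b.values()) / len(student_b)  # 3.0 -- identical averages

# The per-criterion breakdown reveals different instructional needs:
print(raw_score(student_a))  # strong content, weak organization
print(raw_score(student_b))  # the reverse profile
```

The averaged (and totaled) scores are indistinguishable, but the raw-point breakdown shows one student needs help with organization and the other with content – which is the information an averaged grade throws away.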
Validity is a very key component when designing appropriate assessments.

Validity defined: the meaning that we can and cannot properly make of specific evidence (including traditional tests & quizzes). Does the assessment measure what it is supposed to measure?

Invalid assessments: teacher bias, a poorly written item, or another possible explanation can make an assessment invalid. Validity is about our understanding of the results, not the test itself.
Several specific types:
- Content validity – appropriateness of the items tested
- Predictive validity – can the test predict future performance in the area it is measuring?
- Face validity – on its face, the test appears to measure what it is supposed to measure
- Construct validity – does the test measure the constructs/theories/aptitudes it is supposed to measure?
- Concurrent validity – scores on the test relate to other existing measures of the same content or behavior
This is a tough one… Based on what is described here about validity, think of an example where you had concerns or thoughts about how validity has affected your assessment of kids, OR your school's assessment of kids. Think of it as a case where you or your school should have exercised better use of valid measures in determining what kids know and can do…
Reliability: the brother of validity. It asks: are results consistent with patterns of student performance? Meaning, if students took the same test more than once, they'd likely achieve nearly the same score. Single results ALONE are not reliable. Reliability refers to the consistency with which knowledge is measured.

Important factors when designing an assessment:
- Designing appropriate questions
- The length of the assessment is appropriate – not too long or too short
- The range of difficulty of items is appropriate
- The testing environment is conducive to student concentration
A guaranteed and viable curriculum is the #1 school level factor impacting student achievement.
– Marzano, What Works in Schools
Colleges & universities must have such a map…
To what extent do we have a coherent curriculum from the learner’s perspective?
Think about the extent to which you believe your school has a mission-based curriculum. Why or why not?
The goal is to view a learning plan in terms of the learning sought, not the teaching. We must design instruction regardless of how we choose to teach – our style, habits, or other approaches that better suit the teacher. It is not about what meets the teacher's needs; we must plan and design instruction based on what is best for students.
ENGAGING vs. EFFECTIVE

Engaging:
- Lessons that all learners find thought-provoking, stimulating, challenging, etc.
- Pulls students deeper into the subject such that they have to engage through the structure you have designed
- Can't be dry – must be interesting & relevant work
- Students should do more than enjoy it – it should engage them such that it is a worthy effort
- Helps students feel more competent – that they worked toward something important

Effective:
- Students perform to high standards, evidenced through assessments
- They develop better skill & understanding, "greater intellectual power," and self-reflection
Intrinsically motivated:
- Enjoys the task for its own sake
- Eager to learn
- Exceeds expectations
- Almost always performs perfectly

Extrinsically motivated:
- Driven by parent expectations, pursuit of other goals, or school as a means to an end
- Needs to be pushed to perform/succeed

Driven by negative reinforcement:
- Neither good nor bad in school
- Does "just enough" to "get by"
- Not a behavior problem
- Often, we may have a whole class of them – hopefully none of us are this bad…

Lacking motivation:
- Not connected to school
- Doesn't participate
- Does not bother other students

Hostile to learning:
- Openly questions relevance
- Openly opposes authority (verbal and non-verbal)
- Intrudes on the rights of others
- We all have a few of these kids in our schools
- Relevance
- Rigor
- Relationships
- Practicing equity over equality, consistency over inconsistency, common sense over precedent
- Customer service
- Understanding your community's culture
- Totally engaged learners
- Minimal climate problems
- Fewer kids disenfranchised by school
- Sound relationships
- High expectations
- Data-driven decisions
- Accountability
- Articulated curriculum
- Rigorous and relevant instruction
- Personalized learning
- Partnerships
- School climate
- Leadership
Research is very clear… The more we design our classrooms to incorporate a rigorous, relevant, structured, personable, and ENGAGING environment, the more likely we are to reach all 5 types of students enrolled in our schools.
PROBLEM: Our perception as teachers of what students need and find important often does not match students' own perceptions…
Survey item                                                % agreeing
Teachers care about my problems and feelings.              46%
Teachers respect students.                                 55%
Students respect one another.                              31%
I give up when school work is too difficult.               19%
I put forth my best effort in school.                      67%
Teachers make school a fun and exciting place to learn.    32%
School is boring.                                          46%
Students are supportive of one another.                    42%
- Clear performance goals
- Hands-on approach throughout – less teaching/more learning
- Focus on important/interesting issues, topics, problems, questions
- Real-world application as much as possible
- Sound feedback (both formative & summative)
- Personalized/customized for the learner
- Clear expectations and models
- Variety of methods, tasks, activities
- Teacher is primarily the facilitator
- Big ideas/understanding evident each day
READ Chapter 12 in UBD: The Design Process.

Complete the prompt: research on the Internet at least 1 blog, wiki, or Web 2.0 document that could be used to enhance instruction in your classroom.

Task 5: Send me the link(s) in advance so you can walk us through the link next time in class (NOTE: it can be one you use now if you wish…).

Again, please send me all 5 tasks no later than Thursday this week…