
GENERAL  EDUCATION  COURSE  REVIEW    

ENG  1013,  Composition  II  

Fall  2013  Response  to  Fall  2012  Assessment  Report  and  Findings  

Due  to  anecdotal  evidence  of  problems  and  complaints  from  students  and  instructors  in  2011,  four  specific  areas  were  targeted  for  assessment  in  Fall  2012/Spring  2013.      

I.  Clear  communication  of  goals  and  student  learning  outcomes.    Led  by  Dr.  Marcus  Tribbett,  Interim  Director  of  Composition.    

Prior to 2010, Composition I and II syllabi did not contain learning outcomes and often lacked clear grading standards, percentage breakdowns for assignments, or course calendars with assignments and major due dates. In 2010 the Dean of Humanities and Social Sciences instituted the following requirements for all syllabi:

• Inclusion (in Outcomes/Objectives) of language of the specific General Education goal: Communicating Effectively
• Inclusion of required texts and materials
• Inclusion of clear grading policies: percentage breakdown for each assignment and grading scale
• Inclusion of a course calendar with a schedule of major due dates

In  fall  2012,  Dr.  Tribbett  reviewed  all  of  the  syllabi  submitted  for  Composition  II  to  determine  compliance  with  the  standard  requirements.  

Findings:    Most  syllabi  were  in  compliance  with  the  requirements  but  some  needed  revision.    A  few  (five  or  six)  were  missing  the  General  Education  Goal  for  Communicating  Effectively  (some  had  the  goal  but  in  different  words).    Some  did  not  list  course  requirements  clearly,  have  a  grading  scale,  or  have  any  calendar  for  assignments.  

Actions:    Individually  corresponded  with  instructors  requesting  appropriate  changes  in  syllabi  and  resubmission  to  HSS.  

Conclusions and proposed future actions: Most instructors began incorporating the agreed-upon goal and met university and department requirements for consistency. The Interim Director presented these findings to Comp II instructors in a meeting that took place in spring 2013 and stressed the importance of having clear grading policies and calendars. Future syllabi will be systematically reviewed for statement of specific learning outcomes and assessment method as well as inclusion of the General Education goal.

Institutional Response and Implementation: Before and during the first week of the fall 2013 semester, the Department of English and Philosophy and the Dean of Humanities and Social Sciences sent several detailed emails to faculty regarding syllabus requirements. The emails included written instructions as well as a sample general Humanities and Social Sciences syllabus. These requirements were also reiterated at the Composition pre-semester workshop. A sample Composition II syllabus was distributed to faculty and also posted to Composition Instructor Network, a new online, interactive resource and instructional material repository for ASU's Composition faculty developed by the Director of the Writing Program, Dr. Kristi Costello. During the fall 2013 semester, Dr. Costello will create a common Composition II syllabus, complete with assignments, deadlines, and policies, for optional use by new Composition faculty. Several new and returning faculty have shown interest in the new syllabus.

Fall  2013  Composition  II  syllabi  were  reviewed  for  inclusion  of  general  education  goals,  clear  grading  standards,  percentage  breakdown  for  assignments,  and  course  calendars  including  assignments  and  major  due  dates.  Similar  to  Dr.  Tribbett’s  2012  conclusions,  the  majority  of  fall  2013  syllabi  were  in  compliance  with  ASU’s  requirements.  Dr.  Costello  has  been  individually  corresponding  with  and  mentoring  instructors  whose  syllabi  were  not  in  compliance.  

II.  Grade  Inflation  

In  response  to  a  perception  among  some  English  faculty  and  others  across  the  university  that  grades  in  Composition  II  have  become  inflated  in  recent  years,  data  was  obtained  from  ASU’s  Office  of  Institutional  Research  containing  five  years  of  course  grades  semester  by  semester.      

 

 

Prepared by the Office of Institutional Research & Planning. This information reflects what is in the Banner System as of the date and time of this report. Run Date: 01/28/2013; Run Time: 1:37 pm.

Assessment of Student Courses Based on Grades Earned in ENG 1013 - Composition II
Arkansas State University-Jonesboro, 2008 - 2012
(All rows are ENG 1013, Composition II. Left block: number of grades; right block: percent of total grades.)

Term         A  AU    B    C   D    F  FN  I    W  Total |    A%   AU%    B%    C%    D%    F%   FN%    I%    W%
200810     520   0  345  216  49   77   0  0   89  1,296 | 40.1%    --  26.6% 16.7%  3.8%  5.9%    --    --  6.9%
200830       1   0    6    8   1    2   0  0    0     18 |  5.6%    --  33.3% 44.4%  5.6% 11.1%    --    --    --
200840      24   0   10   15   2    4   0  0    4     59 | 40.7%    --  16.9% 25.4%  3.4%  6.8%    --    --  6.8%
200860     169   0   96   53  16   22  32  0   59    447 | 37.8%    --  21.5% 11.9%  3.6%  4.9%  7.2%    -- 13.2%
Total 2008 714   0  457  292  68  105  32  0  152  1,820 | 39.2%    --  25.1% 16.0%  3.7%  5.8%  1.8%    --  8.4%
200910     514   0  275  127  22   38  23  0   78  1,077 | 47.7%    --  25.5% 11.8%  2.0%  3.5%  2.1%    --  7.2%
200930      38   0   15    7   1    2   0  0    3     66 | 57.6%    --  22.7% 10.6%  1.5%  3.0%    --    --  4.5%
200960     156   0  115   99  28   42  18  0   40    498 | 31.3%    --  23.1% 19.9%  5.6%  8.4%  3.6%    --  8.0%
Total 2009 708   0  405  233  51   82  41  0  121  1,641 | 43.1%    --  24.7% 14.2%  3.1%  5.0%  2.5%    --  7.4%
201010     484   0  294  135  50   61  10  0   96  1,130 | 42.8%    --  26.0% 11.9%  4.4%  5.4%  0.9%    --  8.5%
201030      46   0   22    8  10    4   1  0   11    102 | 45.1%    --  21.6%  7.8%  9.8%  3.9%  1.0%    -- 10.8%
201060     192   0   90   72  17   44  16  0   61    492 | 39.0%    --  18.3% 14.6%  3.5%  8.9%  3.3%    -- 12.4%
Total 2010 722   0  406  215  77  109  27  0  168  1,724 | 41.9%    --  23.5% 12.5%  4.5%  6.3%  1.6%    --  9.7%
201110     483   1  278  153  30   64  21  0   95  1,125 | 42.9%  0.1%  24.7% 13.6%  2.7%  5.7%  1.9%    --  8.4%
201130      49   0   15   13   5    1   1  0    3     87 | 56.3%    --  17.2% 14.9%  5.7%  1.1%  1.1%    --  3.4%
201160     216   0  127   92  27   18   6  0   48    534 | 40.4%    --  23.8% 17.2%  5.1%  3.4%  1.1%    --  9.0%
Total 2011 748   1  420  258  62   83  28  0  146  1,746 | 42.8%  0.1%  24.1% 14.8%  3.6%  4.8%  1.6%    --  8.4%
201210     472   0  239  137  24   48  24  0   67  1,011 | 46.7%    --  23.6% 13.6%  2.4%  4.7%  2.4%    --  6.6%
201230      70   0   24   13   1    3   1  1    8    121 | 57.9%    --  19.8% 10.7%  0.8%  2.5%  0.8%  0.8%  6.6%
201260     234   0  110   76  21   27  14  3   40    525 | 44.6%    --  21.0% 14.5%  4.0%  5.1%  2.7%  0.6%  7.6%
Total 2012 776   0  373  226  46   78  39  4  115  1,657 | 46.8%    --  22.5% 13.6%  2.8%  4.7%  2.4%  0.2%  6.9%

Findings: As is apparent from the data above, the skewing of grades toward "A," already troubling at 39.2 percent in 2008, increased by 19.4 percent (from 39.2 to 46.8 percent of all grades, a rise of 7.6 percentage points) by 2012. This data does indeed indicate substantial grade inflation over the past five years, with a general increase in the occurrence of "A" grades and reductions in almost all other grades over the same period. While there are a significant number of Honors and Concurrent High School sections of Composition II over the five-year period, which would be expected to produce a higher number of "A" grades in those sections, the relative number of Honors, Concurrent, and regular Composition II sections remained fairly constant over the five years. Thus, the number of Honors and Concurrent sections could explain some level of grade inflation but not the 19.4 percent increase in the occurrence of "A" grades over this five-year period.

Actions: The Interim Director of Composition discussed grading and did some norming exercises in the Fall 2012 workshop, but clearly more work needs to be done in this area to stress the usefulness of rubric grading for improved consistency across the Composition Program. The Interim Director also researched scholarly literature and listservs for writing program administrators to determine whether Grammarly, a program currently available to all ASU faculty, could be useful in helping faculty objectively measure the level to which essays meet the learning outcome of demonstrating "proficiency in standard American English."
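The arithmetic behind the figures in the Findings above can be reproduced directly from the yearly totals in the grade table. The following is a minimal sketch in Python; the yearly A-grade percentages are copied from the Institutional Research table, and nothing else is assumed.

    # Yearly share of "A" grades in ENG 1013, from the Institutional Research table above.
    a_grade_share = {2008: 39.2, 2009: 43.1, 2010: 41.9, 2011: 42.8, 2012: 46.8}

    first = a_grade_share[2008]
    last = a_grade_share[2012]

    point_change = last - first                      # 7.6 percentage points
    relative_change = (last - first) / first * 100   # about 19.4 percent relative increase

    print(f"A-grade share rose from {first}% to {last}%: "
          f"+{point_change:.1f} points, a {relative_change:.1f}% relative increase")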

Proposed Future Actions: The Composition Committee will work on developing and approving a standard grading rubric for Composition I and II essays. Grade-norming will receive greater emphasis in the fall 2013 Composition Workshop.

Institutional Response and Implementation: A new assessment was conducted in spring 2013: a representative sample of argumentative essays (a common assignment) was evaluated by a jury of raters, and the jury's ratings were compared to the grades the same essays received in the course. (See below for more information regarding this study.)

In fall 2013, Dr. Costello slightly adapted the rubric used in the spring 2013 assessment, developed by faculty who took part in that assessment, to include corresponding grades associated with the level of mastery in the three areas of import (also slightly revised to reflect best practices): Content/Thesis, Organization and Structure, and Style and Mechanics. Dr. Costello is optimistic that pairing grades with textual priorities will help instructors ensure that they are assigning grades appropriate to the quality of work being assessed. Dr. Costello distributed the rubric to faculty at the one-day training and orientation for Composition Link faculty and at the general Composition pre-semester workshop. This rubric has also been posted to Composition Instructor Network for instructors' convenience.

At the general Composition pre-semester workshop, Dr. Costello led a grade-norming session, initiated a discussion regarding what ASU composition faculty value in student writing, shared scholarship on best practices in composition grading, and strongly encouraged instructors to conduct norming sessions in their own classrooms. Recognizing that some instructors' reluctance to assign quality-based grades is due to their concerns regarding student retention and confidence, Dr. Costello hopes that, in line with Composition scholarship, engaging in grade-norming in the composition classroom will help create a common discourse for class discussions of writing and facilitate students' understanding of the standards expected of student writers at ASU. As of fall 2013, new composition faculty are required to base at least fifty percent of the final grade in their course on the quality of student writing, as opposed to effort, participation, completion, and attendance (criteria heavily weighted by current composition faculty). This will be required of all faculty beginning fall 2014. Conversations about grading and grade inflation will be continued by the Composition Committee, whose first charge is to revise the common rubric and thus create community grading standards to be endorsed by the ASU Writing Program, required for new faculty, and encouraged of all faculty beginning spring 2014. The committee will then decide whether or not the rubric will become required for all composition faculty.

The committee will also generate common course objectives and requirements for a range of paper counts and paper lengths, which should help ensure a more consistent level of rigor across Composition II sections, to be piloted in spring 2014 and implemented in fall 2014. The committee will also consider other strategies for addressing grade inflation, such as holistic scoring, multi-reader grading, and portfolio systems.

 III.  No  Systematic  Observation  of  Teaching  

Since the Composition Program has been without a director for several years, there has been no systematic observation of teaching, and the Composition Committee, composed of interested faculty regularly involved in teaching composition, had not met for several years.

Findings: The Interim Director learned anecdotally and through student complaints that some composition instructors were not holding scheduled class meetings but were instead regularly substituting "Blackboard Fridays" or "Blackboard Thursdays" for class meetings, even though these courses were listed in the Class Schedule as using the traditional instructional method. While Dr. Tribbett was also receiving reports of excellent instruction in many Comp II classes, he had reason to believe that the quality of classroom instruction was inconsistent. He therefore determined that we needed to assess classroom teaching through observation.

Actions:      

• The new Chair of English & Philosophy temporarily banned the use of "Blackboard Fridays" except in cases where individual instructors came to her and made a case for the quality of particular instructional practices to substitute for face-to-face class time.

• Faculty  will  conduct  classroom  observations  of  Composition  II  instructors  of  all  ranks  (tenured  to  adjunct)  during  Spring  2013,  having  completed  observations  of  Composition  I  instructors  during  Fall  2012.  

• Revived the Composition Committee and tasked all eight members with classroom observations, taking academic rank into account so that no pre-tenure or non-tenured faculty were reviewing tenured faculty. Only tenured faculty observed other tenured faculty to protect pre- and non-tenured faculty from any potential ill will.

• Developed a draft rubric for observation with four categories and a scale (see attached), despite faculty argument for narrative-only comments.

• Shared preliminary findings of observations with Comp instructors in a meeting prior to the Spring 2013 semester.

Proposed Future Actions: The Composition Committee will continue to conduct observations and will reconvene to discuss observation results. The committee will also plan how to address findings with an eye toward making best practices more widespread in the program through teaching workshops, training sessions, peer observations, mentoring, a shared assignment databank, etc. The new Director of the Writing Program will work with the Composition Committee and get its input on improving program-wide class instruction and on developing best practices for Web-Assisted and online classes.

Institutional Response and Implementation: The Chair of English & Philosophy, Dr. Janelle Collins, has continued the ban on "Blackboard Fridays," except in a couple of cases in which individual instructors made a case for the quality of particular instructional practices to substitute for face-to-face class time. Faculty classroom observations of Composition II instructors of all ranks (tenured to adjunct) will continue in fall 2013. All Composition II faculty will be observed each academic year. As in the previous year, the Composition Committee will be tasked with classroom observations, taking academic rank into account so that no pre-tenure or non-tenured faculty are reviewing tenured faculty. Only tenured faculty will observe other tenured faculty, to protect pre- and non-tenured faculty from any potential ill will. For consistency, observing faculty will utilize the same observation rubric. Findings will be shared with faculty in spring 2014.

In addition to continuing faculty observations, several additional steps (explained in more detail above) are being explored and implemented to create a better and more consistent standard of quality in our Composition II courses: Composition Instructor Network (which houses a variety of assignments, activities, readings, and other resources), a community grading rubric, common course objectives, improved training and workshops, and consistent longitudinal assessment. Already, Composition instructors have been offered two technology-related pedagogical workshops and an extended pre-semester workshop.

IV. No documented use of student learning data specifically related to the General Education goal for communicating effectively. Although there have been efforts over the years to assess this goal using standardized tests and collection of sample papers, the results were never widely disseminated or discussed in order to lead to improved student learning. The Interim Director of Composition focused on assessment of Composition I during Fall 2012 and Composition II during Spring 2013.

Assessment: The Interim Director collected one argumentative paper using sources from a representative sample of Composition II courses; the papers were assessed by a jury of trained composition assessment raters using a rubric. The essays were assessed for thesis, coherence, use of sources, and grammar/usage.

ENG 1013, Composition II

Spring  2013  Assessment  Plan  

Needed: Direct assessment data on whether Composition II courses are meeting objectives in terms of measurable student outcomes. Fall 2012 assessment data did show significant growth both in the category of competency with thesis/coherence and in high competency, but as the findings discuss, the assignment needs to be more standardized and the assessment needs to be normed to give the Composition Program more useful information. In addition, while there is anecdotal information (such as the energetic faculty listserv discussion in summer 2012) that faculty university-wide feel that our students aren't proficient in standard American English, we don't have solid data to help us analyze the extent of the problem or whether the problem is worse among particular groups of students.

Assessment: For this assessment, the Interim Director of Composition gathered the last essay assignment from sections of Composition II representing as nearly as possible the proportions of kinds of students (developing, regular, Honors) and kinds of instructors (tenured/pre-tenure, non-tenure-track contract instructors, and temporary or part-time adjuncts). The essays collected represented an end-of-term argumentative essay assignment (a position essay) common in Composition II. Student names were removed, but the actual grade assigned to each essay was kept in a separate file by the Interim Director for later comparison, and the essays were then coded to indicate the type of section (remedial, regular, Honors) as well as the type of instructor.

Instructors representing the range of ranks teaching Composition II classes were chosen for training during spring 2013 and May Interim 2013. These instructors met with the Interim Director, Henry Torres, and Josie Welsh twice during spring 2013 to discuss the task, potential problems, what kinds of instructions needed to go out to instructors participating in the assessment, etc. These instructors looked at how Blackboard Learn could be used as a platform for assessing these student essays.

After the essays were collected at the end of April 2013, the instructors received training from Henry Torres and ITTC in assessment rating. This included a norming session to determine the level of inter-rater reliability. Following the mid-May training session, the instructors analyzed all the student artifacts as homework.

The essays were analyzed using the Grammarly tool available in Blackboard. This tool is available to all ASU faculty, but the Composition faculty in particular feel that it is not reliable or useful. The Composition faculty analyzed the first twelve lines of the second page of each student artifact for grammar and usage using a rubric, identifying and categorizing not only the level of overall competence but also individual areas of competence, such as subject-verb agreement, comma usage, and spelling, and the number of significant errors. Each artifact was analyzed by at least two ASU instructors. The Grammarly analysis of usage and grammar was then compared to the ASU faculty analysis. The goal was not only to establish data showing the extent of our students' competence in standard American English at the end of Composition II (proficiency in standard American English is a key part of the General Education learning outcome), but also to determine the extent to which Grammarly might or might not be a useful tool for Composition instructors.
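The report does not describe how the Grammarly analysis was compared with the faculty analysis. Purely as an illustration, one simple approach would be to correlate per-artifact error counts from the two sources; the sketch below assumes hypothetical lists of error counts (faculty_errors, tool_errors) for the same excerpts and is not the method actually used.

    from scipy.stats import pearsonr

    # Hypothetical per-artifact error counts for the same 12-line excerpts:
    # one count from the paired faculty raters, one from the automated tool.
    faculty_errors = [2, 5, 1, 0, 7, 3, 4, 2]
    tool_errors = [3, 6, 0, 1, 9, 2, 4, 3]

    # A strong positive correlation would suggest the tool tracks faculty judgments
    # of error frequency; a weak one would support the faculty's skepticism.
    r, p_value = pearsonr(faculty_errors, tool_errors)
    print(f"Correlation between faculty and tool error counts: r = {r:.2f} (p = {p_value:.3f})")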

In addition, the essays were also analyzed with a rubric to determine the level of competency (>75%) and the level of high competency (>90%) for thesis and coherence, measured separately.

Again, each student essay was analyzed by at least two ASU instructors in an attempt to produce more comparable and reliable findings than were achieved in fall 2012. Thesis and coherence are central to the first element of the General Education outcome for Composition II, that students can "construct and deliver a well-organized" (emphasis added) oral or written presentation.

The instructors then returned to the ITTC as a group to compile a report of their findings concerning the level of competence Composition II students achieved in their end-of-term argumentative essays in the areas of thesis, coherence, and grammar/usage. At the conclusion of their work, the instructors completing the training received certificates as composition assessment raters.

Composition Assessment – Spring 2013

NUMBER OF ARTIFACTS GRADED – COMP I = 189; COMP II = 392
NUMBER OF STUDENTS – COMP I = 95; COMP II = 196
ASSIGNMENT – End-of-Course Paper
TOOL – DEPARTMENTAL RUBRIC; adapted from The University of Nebraska at Kearney, 2010. Permission granted.
TRAINING HELD FOR GRADERS? – YES
GRADERS – 8 faculty, including adjunct faculty, instructors, and professors

RESULTS – THESIS

Thesis: thesis statement and thesis development

4—The writer articulates the thesis clearly and presents cogent evidence in favor of his or her argument in every paragraph.

3—The writer states the thesis reasonably clearly—the reader does not need to guess or even to infer the paper’s thesis—and supports the argument with solid evidence and reasons. In one or two spots the evidence seems flimsy, or the argument tendentious, but overall the writer presents a careful, sound, and convincing argument.

2—The writer states one thesis but ends up arguing two or more. The argument seems rushed or perfunctory, and the evidence that the writer presents to support his/her claims is inadequate.

1—There is more than one thesis or none at all. The writer often substitutes textual summary for argumentation. S/he presents opinions rather than evidence and reasons for his/her claims, often signaled by such phrases as “I think” and “I believe.”

CONSISTENCY BETWEEN GRADERS – Do raters grade in a similar fashion? Reliability statistic should be at least .7. Raters had some trouble agreeing:
Rater1 and Rater2 = .38
Rater3 and Rater4 = .63
Rater5 and Rater6 = .54
Rater7 and Rater8 = .43
Recommendation: Continue to train graders for this category; use sample artifacts.
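The report does not name the reliability statistic that was calculated. As an illustration only, the sketch below computes one common choice, Cohen's weighted kappa, for a single rater pair using scikit-learn and hypothetical score lists; the statistic actually used in the assessment may differ.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical rubric scores (1-4) assigned by a pair of raters to the same artifacts.
    rater1 = [3, 2, 2, 4, 3, 1, 2, 3, 2, 3]
    rater2 = [3, 2, 3, 3, 3, 2, 2, 2, 2, 3]

    # Linear weighting penalizes a 1-vs-4 disagreement more than a 2-vs-3 disagreement.
    kappa = cohen_kappa_score(rater1, rater2, weights="linear")
    print(f"Weighted kappa for this pair: {kappa:.2f}")  # compare against the .7 benchmark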

EFFECTIVENESS OF TOOL: How Well is Rubric Working? – Pretty well.

The distribution of actual scores looks different from the random distribution one would expect if random ratings were assigned to the essays (charts not reproduced here).

The actual match rate between two raters on a given artifact was higher than the match rate one would expect by chance; the percentages reported reflect differences between actual and random match rates (charts not reproduced here).

SCORES: Summarize the ratings of the artifacts – THESIS scores from COMP II essays were statistically significantly higher than scores for COMP I essays. Many final COMP I essays contained no thesis. Are the authors of such essays passing the course?

Comp I – 45% scored 3 (proficient) or higher; 40% stuck at 2; 15% received 1s

Comp II - 57% scored 3 (proficient) or higher; 31% stuck at 2; 12% received 1s

RESULTS – ORGANIZATION

Organization and Coherence:

4—Every paragraph is in its proper place, and the transitions are smooth. The argument builds methodically toward a conclusion. In addition, the sentences within each paragraph are well-articulated; there is a topic sentence where needed, followed by evidence and reasons for the claim laid out in the topic sentence. Each paragraph supports the thesis and fits integrally into the paper as a whole.

3—The paper is well organized, but the transitions do not have the finesse of those in a “4” paper. The reader can see the “rivets” that hold the paper together. Claims in paragraphs are developed but sentences could flow together more smoothly or be revised to make the point more tightly.

2—The paper has a discernible order, but it might benefit from re-ordering or reorganizing some paragraphs. Claims may lack sufficient development. The transitions are rough, some quotations are “dropped” into the text without warning or explanation, and the essay does not build organically toward a conclusion.

1—There is little discernible organization within the paragraphs in the essay as a whole and/or claims are undeveloped.

CONSISTENCY BETWEEN GRADERS: Do raters grade in a similar fashion?

Reliability statistic should be at least .7. Raters had less trouble agreeing:
Rater1 and Rater2 = .54
Rater3 and Rater4 = .62
Rater5 and Rater6 = .51
Rater7 and Rater8 = .67
Recommendation: Continue to train graders for this category; use sample artifacts.

EFFECTIVENESS OF TOOL: How Well is Rubric Working? – Pretty well.

The distribution of actual scores looks different from the random distribution one would expect if random ratings had been assigned to the essays. Ratings of 1 and 4 aren't applied very often; however, one would expect few ratings of 1 for COMP II students (charts not reproduced here).

The actual match rate between two raters on a given artifact was higher than the match rate one would expect by chance; the percentages reported reflect differences between actual and random match rates. Lower difference scores probably reflect little use of 1 (charts not reproduced here).

A grade of 4 is rarely assigned.

SCORES: Summarize the ratings of the artifacts:

Comp I – 40% scored 3 (proficient) or higher; 52% stuck at 2; 8% received 1s

Comp II - 51% scored 3 (proficient) or higher; 41% stuck at 2; 8% received 1s

Similar to findings from THESIS, COMP II essay scores were significantly higher than COMP I essays in ORGANIZATION; however, for both courses, essays received higher marks for thesis than they received for organization.

RESULTS – GRAMMAR

Grammar and Usage:

4—Grammar, spelling, and mechanics are nearly perfect. The language is clear, concise, and engaging. Sentence structures and lengths are varied with excellent command of subordinating and coordinating strategies used to create compound-complex sentence structures. The word choice is apt and precise, not overblown, clichéd, or too flowery.

3—There may be a few minor grammatical errors, but on the whole the paper is grammatically clean and correct. The grammatical and spelling errors are limited to difficult issues. The language is clear, though not elegant, economical but not quite succinct. Sentence structures show some variation and complexity. Word choice lacks the crisp appropriateness of a “4” essay.

2—There are multiple grammatical errors, but they are neither so pervasive as to slow down the reader nor so serious that the reader cannot understand what the writer is trying to say. The writer uses more words than necessary to convey the point, perhaps to pad the essay. Though most sentences are readable, little variation in sentence structure occurs beyond some compounds formed with conjunctions; repetitive sentence patterns create "choppiness." The writer occasionally lapses into cliché. The essay seems written in phrases, without attention to precise meaning of words or recognition of redundancy. Some words may be misused, some used in almost the right sense.

1—Sloppy grammatical, spelling, and mechanical mistakes litter the piece, interrupt the flow of reading, and make comprehension difficult. Sentences, even when grammatically correct, rarely if ever vary from simple SVO structures.

CONSISTENCY BETWEEN GRADERS – Do raters grade in a similar fashion? Reliability statistic should be at least .7. Raters had less trouble agreeing:
Rater1 and Rater2 = .6
Rater3 and Rater4 = .6
Rater5 and Rater6 = .47
Rater7 and Rater8 = .7

Recommendation: Continue to train graders for this category; use sample artifacts

EFFECTIVENESS OF TOOL: How Well is Rubric Working? – Pretty well.

The distribution of actual scores looks different from the random distribution one would expect from random ratings assigned to the essays. Grades of 1 and 4 are not assigned often; however, one would expect few ratings of 1 for COMP II students (charts not reproduced here).

The actual match rate between two raters on a given artifact was higher than the match rate one would expect by chance, except for category 4, because there were few ratings of 4. The percentages reported reflect differences between actual and random match rates (charts not reproduced here).

Faculty should discuss whether they wish to keep 4 or make finer distinctions between 2 and 3 such that a less rigorous 4 emerges as a scoring option. The program should consider taking a close look at the percentage of 2's and 3's for the learning outcome called GRAMMAR for Comp I and Comp II.

SCORES: Summarize the ratings of the artifacts:

Comp I - 41.5% scored 3 (proficient) or higher; 42% were stuck at 2; 16.5% were rated 1

Comp II - 52% scored 3 (proficient) or higher; 43% were stuck at 2; 5% were rated 1

Similar to the findings for ORGANIZATION, GRAMMAR ratings were significantly higher for COMP II essays than for COMP I essays. Additionally, many final essays were judged to contain incoherent sentences. Are the authors of those essays passing the course?

FEEDBACK TO RATERS

Below are charts that present each rater's grading tendencies compared with the ratings given by the rater with whom they were paired. Individuals were told they could contact Dr. Josie Welsh, Director of Assessment, ABI 315, 972-2989, to confirm unique identifiers. Codes were given only to the individual rater.

0 means you assigned the same score as your paired rater’s score

1 means you assigned one more point to an artifact than your fellow rater assigned.

-1 means you assigned one fewer point to an artifact than your fellow rater assigned.
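A minimal sketch of how the difference codes described above could be tallied for one rater, assuming the paired raters' scores are available as parallel lists (the score values here are hypothetical):

    from collections import Counter

    # Hypothetical scores for one rater and for the rater with whom they were paired.
    my_scores = [3, 2, 4, 2, 3, 1, 2, 3]
    paired_scores = [3, 3, 3, 2, 2, 2, 2, 3]

    # 0 = same score; +1 = one point higher than the paired rater; -1 = one point lower.
    differences = Counter(mine - theirs for mine, theirs in zip(my_scores, paired_scores))

    for diff in sorted(differences):
        print(f"{diff:+d}: {differences[diff]} artifacts")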

(Per-rater charts for Rater One through Rater Eight are not reproduced here.)

Feedback to Raters: Overall Average Ratings Given to Essays

Descriptive Statistics

Measure               Rater    M      SD     N
THESIS POINTS         One      2.56   .55    75
                      Two      2.26   .95    74
                      Three    2.72   .90    76
                      Four     2.17   .79    76
                      Five     2.86   .67    70
                      Six      2.88   .90    69
                      Seven    2.26   .97    70
                      Eight    2.49   .91    71
                      Total    2.52   .88    581
ORGANIZATION POINTS   One      2.52   .64    75
                      Two      2.18   .83    74
                      Three    2.62   .89    76
                      Four     2.18   .76    76
                      Five     2.79   .61    70
                      Six      2.74   .70    69
                      Seven    2.36   .90    70
                      Eight    2.59   .77    71
                      Total    2.49   .80    581
GRAMMAR POINTS        One      2.53   .64    75
                      Two      2.50   .82    74
                      Three    2.46   .93    76
                      Four     2.34   .62    76
                      Five     2.66   .61    70
                      Six      2.38   .81    69
                      Seven    2.53   .79    70
                      Eight    2.32   .79    71
                      Total    2.46   .76    581
TOTAL POINTS          One      7.61   1.55   75
                      Two      6.93   2.24   74
                      Three    7.80   2.53   76
                      Four     6.70   2.02   76
                      Five     8.30   1.64   70
                      Six      8.00   2.14   69
                      Seven    7.14   2.49   70
                      Eight    7.41   2.11   71
                      Total    7.48   2.16   581
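The grouped means, standard deviations, and counts in the table above follow from a standard grouped summary. Below is a minimal sketch, assuming the individual ratings were exported to a long-format CSV; the file name and column names (rater, measure, points) are hypothetical.

    import pandas as pd

    # Hypothetical export: one row per rating, with columns "rater", "measure", "points".
    ratings = pd.read_csv("composition_ratings_spring2013.csv")

    # Mean, standard deviation, and count for each measure and rater,
    # mirroring the layout of the Descriptive Statistics table above.
    summary = (ratings
               .groupby(["measure", "rater"])["points"]
               .agg(M="mean", SD="std", N="count")
               .round(2))
    print(summary)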

Findings: This study led the assessment team to reach the following conclusions:

• Scores improve from Comp I to Comp II
• Data should be reviewed by the group to inform decisions
• The rubric should be used, and people should be trained to use it
• Perhaps upper-level classes should use it to reach the "4" rating

 Proposed  Future  Actions:  All  of  this  data  will  be  presented  to  the  Composition  Committee,  the  incoming  Composition  Director,  and  the  English  faculty  and  Composition  instructors,  to  help  shape  and  improve  the  teaching  of  composition  at  Arkansas  State  University.  

Institutional Response and Implementation: The data was presented to the English faculty and Composition instructors at the first department meeting. The data was also recently shared with the new Writing Program Director, Dr. Costello (August 2013), who is in the process of developing practices that address the deficiencies noted in the assessment (as discussed above: common rubric, grade-norming among instructors and in the classroom, etc.) and an assessment project that will measure future improvement in ASU's Composition I and II.

 

Composition I and II Fall 2012-2013 Assessment Summary

The Composition Program at Arkansas State University has been without a director for several years, so while some artifacts have been gathered, they had not been analyzed, and systematic assessment had not occurred prior to Fall 2012. In Fall 2011 the Department of English & Philosophy, realizing this problem, obtained permission from the then-interim provost to search for a twelve-month director of composition, with fifty percent teaching responsibilities in fall and spring (two courses each fall and two each spring) and fifty percent administrative responsibilities to help build a stronger writing program. This new director of composition has been tasked with bringing greater consistency and effectiveness to first-year writing courses and supervising the graduate students who staff the Learning Center Writing Lab.