
DRAFT REPORT ON THE ASSESSMENT OF DCC

INSTITUTIONAL LEARNING OUTCOME 6: CRITICAL THINKING

January 2016

Submitted by Stephanie Roberg-Lopez


Table of Contents

Introduction

A. Assessment Methodology

B. Classes Assessed

C. Results by Class

D. Statistical Analysis

E. Summary and Conclusions

F. Recommendations


INTRODUCTION

Critical Thinking, as the foundation of a college education, is the most powerful asset any student may develop during his or her years of study. Information and skill-based learning are of enormous importance in Community Colleges today; however, the well-known saying "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime" offers a meaningful metaphor for the value and importance of teaching critical thinking to our 21st-century Community College students. Studies have repeatedly shown that college graduates with a strong critical thinking background have greater success finding jobs and succeeding as workers and citizens.

Today's college students live in an electronic universe where prepackaged statements, ideas, and images dominate as information resources. In a way, these students are challenged more than any group of students before them when it comes to sorting fact from a sea of fiction, distortion, and spin. Teaching critical thinking may now be more essential than ever, because it equips our graduates with the tools to formulate or evaluate arguments, problems, or opinions and to arrive at a solution, position, or hypothesis based on carefully considered evidence.

A. ASSESSMENT METHODOLOGY

The ISLO 6 Critical Thinking Assessment Committee included faculty and administrator volunteers drawn from across the campus community. Through a series of meetings and individual classroom assessments, the Critical Thinking committee developed a methodology to evaluate a representative sample of students across the disciplines.

The goals of the assessment were clearly defined as: gaining an accurate understanding of where DCC students stand in terms of their ability to apply critical thinking to specific questions and situations, evaluating the findings of the critical thinking assessments, and making constructive and meaningful recommendations on how critical thinking may be taught more effectively to our student community. With a solid, empirically based body of both quantitative and qualitative results from the assessment, the final step in the ISLO 6 assessment process will be to share our findings across the campus community and, as a group, develop meaningful and concrete responses to what we have learned.

The first meeting of the Critical Thinking Assessment Committee (hereafter, the CTAC) took place on October 8, 2015. The meeting was attended by faculty and administrators representing a broad range of disciplines across the campus. The group focused on the DCC ISLO 6 definition of Critical Thinking: "Students will formulate or evaluate arguments, problems, or opinions and arrive at a solution, position, or hypothesis based on carefully considered evidence."

Through a broad-ranging and productive discussion, the group identified three primary weaknesses exhibited across the DCC student body. They can be summarized as:

Students think simplistically and rely heavily on opinion.

Students have difficulty reading and thinking critically.

Students have difficulty identifying and applying factual evidence to questions.

The second meeting of the CTAC took place on October 30, 2015. During this meeting, the Committee developed a methodology for assessing critical thinking across the disciplines. The first task involved a discussion and analysis of a number of definitions of Critical Thinking, which resulted in the CTAC confirming the validity of the DCC definition of Critical Thinking. The second task involved breaking the elements of critical thinking into three basic skill sets: 1. Can the student formulate or evaluate arguments, problems, or opinions? 2. Can the student arrive at a solution, position, or hypothesis? 3. Does the student use carefully considered evidence? To assess student competence in these three skills, the committee created an ISLO 6 Critical Analysis and Reasoning Rubric. The three analytic criteria are assessed on four levels of achievement: exceeds the standard, meets the standard, is developing skills, and does not meet the standard. Specific assessment criteria for each of these skill levels are presented in the rubric. This rubric, along with instructions for assessment and reporting, was distributed as a Critical Thinking Assessment Guide to Committee members and to faculty whose classes had been identified for the assessment process (see below).

The classes selected for assessment varied widely in subject, ranging from Aviation Science to Social Problems in Today's World to Immunohematology/Serology. The Committee concluded that a "One Size Fits All" standardized assessment would not address the critical thinking nuances attached to specific disciplines. Therefore, the assessment instrument for each class was developed by the instructor or, in the case of classes with multiple sections being assessed, the instructors. The results, however, were reported in a standardized submission that included a completed rubric and a narrative explaining the assessment instrument and analyzing the results. The instructions and reporting instrument are presented below. For classes assessed in multiple sections, the results are reported as an aggregate.

A third meeting was held on January 14, 2016, at which the CTAC considered the results of the assessment and contributed input to the summary, conclusions, and recommendations. This information was incorporated into the final draft of the report.


Critical Thinking Assessment Guide

For faculty completing a classroom-level assessment.

We appreciate your collaboration in completing the assessment of the DCC Critical Thinking ISLO as we prepare our campus-wide response to the Middle States mandate.

Each Department has developed an assessment tool specific to its discipline. These tests or assessments will be administered by faculty in those departments. The Committee assessing Critical Thinking will analyze the results submitted by each classroom instructor and submit a report on those results, which will be made available to the campus community.

Although assessment tools will vary widely from department to department based on the discipline, the reporting template is the same for each class assessed. We have attached the Critical Thinking Assessment Rubric developed by our committee, and ask that each instructor’s assessment results be reported as explained below. We will need both a quantitative result and a short narrative for each classroom assessment administered.

Although we will not be considering the personal information for each student assessed, we request that each instructor retain and archive the following documents:

The student roster for the class, indicating who was present for the test on that given day.

A copy of the assessment instrument.

The completed tests.

Again, we will not be including student academic or demographic data in our report; however, the information should be archived in the event that Middle States requests it.

The quantitative method for reporting assessment results is explained below.

Thank you for your help in completing this important assessment.


Institutional Student Learning Outcome #6: Critical Thinking

Definition of the ISLO: “The formulation or evaluation of arguments, problems, or opinions in order to arrive at a solution, position, or hypothesis based on carefully considered evidence as appropriate to the discipline being assessed.”

Levels of achievement: 4 = Exceeds Standard (key word: comprehensively), 3 = Meets Standard (accurately), 2 = Developing (partially/inconsistent/attempts to), 1 = Does Not Meet Standard (does not).

Criterion 1: Can the student formulate or evaluate arguments, problems, or opinions?
4 = Exceeds: Student formulates or evaluates an argument, problem, or opinion comprehensively. The issue/problem is stated clearly and described comprehensively, delivering all relevant information necessary for full understanding.
3 = Meets: Student formulates or evaluates arguments, problems, or opinions accurately, so that understanding is not seriously impeded by omissions.
2 = Developing: Student formulates or evaluates an argument, problem, or opinion only partially. Description leaves key concepts undefined and ambiguous.
1 = Does Not Meet Standard: Student does not identify, formulate, or evaluate appropriate arguments, problems, or opinions.

Criterion 2: Can the student arrive at a solution, position, or hypothesis?
4 = Exceeds: The specific solution, position, or hypothesis takes into account the complexities of an issue. Limits of the solution, position, or hypothesis are acknowledged. Others' points of view are synthesized. Conclusions and related outcomes are logical and accurate.
3 = Meets: Identifies or presents a specific solution, position, or hypothesis and/or recognizes the different sides of an issue without further development. Conclusions and related outcomes are logical and accurate, with minor flaws.
2 = Developing: Identifies or describes a specific solution, position, or hypothesis without sound reasoning or demonstration of proficiency. There are inconsistencies in reasoning or interpretation. Student demonstrates limited understanding.
1 = Does Not Meet Standard: Student does not accurately arrive at an appropriate solution, position, or hypothesis.

For the critical thinking rubric, we have identified three separate elements that, combined, provide a snapshot of our students’ critical thinking across academic disciplines. The assessment rubric below provides guidance on the standards for each of these aspects, and the levels of comprehension identified.

For the quantitative aspect of the assessment, we are asking that all instructors fill out and submit the rubric below with numbers only. Instructors should, as part of their short narrative provided with the critical thinking assessment rubric, indicate the following:

The number of students in the entire course

The number of students assessed against the critical thinking rubric (i.e. the sample size)


When instructors fill out the rubric, then, we are looking for only a number within the boxes of the rubric itself. These numbers, when added horizontally in each of the three rows, should equal the sample size for the course being assessed.

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | [number of students assessed at this level] | [number assessed at this level] | [number assessed at this level] | [number assessed at this level]
2. Can the student arrive at a solution, position, or hypothesis? | [number assessed at this level] | [number assessed at this level] | [number assessed at this level] | [number assessed at this level]
3. Does the student use carefully considered evidence? | [number assessed at this level] | [number assessed at this level] | [number assessed at this level] | [number assessed at this level]

For example, if the course being assessed were GOV 121, the narrative might indicate that there are 200 students registered for the course in all sections, and of those 200, 50 were assessed against the critical thinking rubric. When filling out the rubric, the final result might look like this:

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | 15 | 25 | 7 | 3
2. Can the student arrive at a solution, position, or hypothesis? | 8 | 24 | 12 | 6
3. Does the student use carefully considered evidence? | 11 | 21 | 9 | 9

The supporting short narrative, then, will provide a brief synopsis of the instructor’s analysis of the results. The above format will provide us with a consistent baseline of data, from which we can find ways to comprehensively examine critical thinking across all departments. If additional information is needed, the data that we are asking instructors to maintain (see above) should be able to provide any further support needed to augment the rubric data.
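To make the reporting arithmetic concrete, the sketch below is a hypothetical illustration (not part of the assessment instructions): it represents one submitted rubric as a small Python structure, checks that each row of counts sums to the sample size, and converts the counts to percentages. The counts and sample size are taken from the GOV 121 example above; the function and variable names are ours.

```python
# Hypothetical illustration of the reporting arithmetic described above.
# Counts are taken from the GOV 121 example (sample size 50).

CRITERIA = [
    "1. Formulate or evaluate arguments, problems, or opinions",
    "2. Arrive at a solution, position, or hypothesis",
    "3. Use carefully considered evidence",
]

# Each row lists counts for levels 4, 3, 2, 1 (Exceeds, Meets, Developing, Does Not Meet).
gov121_counts = {
    CRITERIA[0]: [15, 25, 7, 3],
    CRITERIA[1]: [8, 24, 12, 6],
    CRITERIA[2]: [11, 21, 9, 9],
}

def check_and_summarize(counts, sample_size):
    """Verify each row sums to the sample size and report the percent at each level."""
    for criterion, row in counts.items():
        assert sum(row) == sample_size, f"Row does not sum to sample size: {criterion}"
        percents = [100 * n / sample_size for n in row]
        print(criterion)
        print("  Exceeds {:.0f}%, Meets {:.0f}%, Developing {:.0f}%, Does Not Meet {:.0f}%"
              .format(*percents))

check_and_summarize(gov121_counts, sample_size=50)
```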

In addition to the data requested above (the student roster for the class, a copy of the assessment instrument, and the completed tests), we recommend that instructors maintain a record of each assessed student's performance within this matrix. Such a matrix might look like this:


Student | 1. Formulate or evaluate arguments, problems, or opinions (score 4, 3, 2, 1) | 2. Arrive at a solution, position, or hypothesis (score 4, 3, 2, 1) | 3. Use carefully considered evidence (score 4, 3, 2, 1)
Student #1 | 3 | 2 | 3
Student #2 | 4 | 3 | 3
Student #3 | 2 | 2 | 2
Student #4 | 2 | 1 | 2
Student #5 | 3 | 3 | 2

B. CLASSES ASSESSED

The classes selected for assessment are the following:

AVI-110 Aviation Law

BHS 103 Social Problems in Today’s World

BUS 215 Business Law I

CPS 231 Data Structures

HIS 103 History of the United States I

ELT 213 Electromagnetism & Motors

ELT 250 Electronics Project Laboratory

GOV 121 American National Experience

MLT 106-010 Immunohematology/Serology

PAL 210 Family Law


C. RESULTS BY CLASS

AVI-110 Aviation Law – 12 Students Assessed

Observation and Recommendations

I was somewhat surprised by the simple answers of a few of the students. We discussed the law and the desired outcomes during the lecture, open discussion ensued in the classroom, and each student began formulating hypotheses and opinions. I did not have a follow-up plan to personally sit and discuss the answers and results with each student. Did some students formulate a hypothesis but prove unable to communicate it in writing? During the next assessment of Critical Thinking, I will include a follow-up meeting to discuss the test and the answers with each student, with special emphasis on those assessed at the Developing level or lower.

Aggregate results

Mean score | Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
| Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
3.00 | 1. Can the student formulate or evaluate arguments, problems, or opinions? | 2 | 8 | 2 | 0
3.08 | 2. Can the student arrive at a solution, position, or hypothesis? | 3 | 7 | 2 | 0
3.16 | 3. Does the student use carefully considered evidence? | 1 | 10 | 1 | 0


BHS 103 – Social Problems – 173 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | 72 | 52 | 27 | 22
2. Can the student arrive at a solution, position, or hypothesis? | 49 | 84 | 34 | 6
3. Does the student use carefully considered evidence? | 49 | 76 | 34 | 14

The majority of our Social Problems students met or exceeded the standards for all three criteria of critical thinking. Students who failed to meet or only partially met the standards had the most difficulty with criterion one, the ability to formulate or evaluate arguments, problems, or opinions. The question used to assess criterion one is qualitatively different from the questions for criteria two and three. Question one required broadly applying the sociological perspective to a common/popular claim put forth in our society concerning a significant social problem, whereas questions two and three focused on explaining and analyzing a particular phenomenon. Faculty might focus on reinforcing the relationship between the broader conceptualization of the sociological perspective and more focused attempts to formulate hypotheses or provide explanations for specific phenomena.


BUS 215 - Business Law I – 158 Students Assessed

Business Law I--Exam #2--Trademark Dilution Case Problem

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | 55 | 52 | 37 | 14
2. Can the student arrive at a solution? | 73 | 40 | 33 | 12
3. Does the student use carefully considered evidence? | 26 | 32 | 60 | 40

Observation and Recommendations

It should be noted that in legal analysis, it is important that the legal issue central to the case be clearly identified and thoroughly examined. The rubric summaries appear to indicate that the thoroughness of the examination of the legal issues is often lacking for many students. Furthermore, a correct legal conclusion was often reached without application of the correct legal principles. It is noteworthy that these cases involved material that was discussed in class prior to the exams, using examples of situations where the legal principles would apply. These aspects of the law were also discussed in the course textbook. However, on the second case problem a "New York exception" came into play. This exception was discussed in class (using an illustrative example) but was not addressed in the textbook.

Business Law I is a course that contains a very extensive amount of material and specialized terminology. Experience has shown that it is very challenging for many students. It is also a course where class attendance, extensive in-class note taking, and effective study habits are essential for student success. Student deficiencies in these areas likely also account for the number of students who were either "developing" or "does not meet standard" in the assessment.


CPS 231 – Data Structures – 27 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | 14 | 0 | 0 | 0
2. Can the student arrive at a solution, position, or hypothesis? | 12 | 2 | 0 | 0
3. Does the student use carefully considered evidence? | 11 | 3 | 0 | 0

To assess the Critical Thinking skills of CPS 231 students, a question on the final exam presented a real-world situation and asked what data structure they would implement to deal with that situation, including the rationale for their choice and an analysis of the efficiency of the structure.

The students did extremely well, which I expect from third-semester CPS students. Since all students met or exceeded the outcomes, no changes to the course or program seem necessary at this time. In the future, a more challenging assessment instrument may be worth trying.


ELT 213 - Electromechanical Devices – 14 Students Assessed

* ELT 213 results are included in the descriptive statistics, but not in the parametric statistics.

Criterion | Measure | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | Final Exam, Question 3 | 2 | 8 | 2 | 1
2. Can the student arrive at a solution? | Motors Quiz, Question 8 | 0 | 9 | 2 | 3
3. Does the student use carefully considered evidence? | Pneumatics Lab Report, Question K1 | 3 | 2 | 4 | 0


ELT 250 – Electronics Project Laboratory – 13 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | 3 | 9 | 1 | 0
2. Can the student arrive at a solution? | 6 | 5 | 2 | 0
3. Does the student use carefully considered evidence? | 3 | 7 | 3 | 0

Observations and Recommendations

Students assessed at the "developing" level are most often in that state for one of two reasons: lack of personal motivation or lack of resources. Lack of resources usually means that they are working too many hours outside of school to be able to dedicate the time to their school work that is necessary to be fully successful.

More needs to be done to make sure that students provide proper citations for their sources. Although I "reteach" them how to provide proper citations, it appears that they are not required to do this consistently throughout their college coursework.


GOV 121 American National Experience – 143 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | 10 | 17 | 23 | 22
2. Can the student arrive at a solution, position, or hypothesis? | 11 | 19 | 24 | 18
3. Does the student use carefully considered evidence? | 11 | 16 | 17 | 28

Observations and Recommendations

Many students took the course in their first semester, and it is thus not surprising that the majority of students do not meet the standard on the three levels on which they were evaluated. Students often failed to give concrete examples to support their positions, if they formulated them at all. Poor writing skills frequently contributed to unclear descriptions of the theories and their explanatory power. It is not all that surprising that reading and writing skills are not up to par, as many of the students in these sections are in their first semester of college work and many are in remedial courses for reading and writing. This does show the inherent relatedness and importance of developing reading, writing, and analytical skills simultaneously so that students can effectively communicate what they know.

The approach to the assessment, based on a question in the final exam, has clear limitations. Notably, the lack of a baseline makes it impossible to evaluate the impact of the semester and, specifically, of the course on the development of students' critical thinking skills. In order to address these shortcomings, an assessment of GOV 121 scheduled for the spring 2016 semester will take a two-step approach, evaluating students' critical thinking skills early in the semester and at the end of the semester.


HIS 103: History of the United States I – 25 Students Assessed

* HIS 103 results are included in the descriptive statistics, but not in the parametric statistics.

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | 5 | 8 | 9 | 3
2. Can the student arrive at a solution? | 3 | 12 | 8 | 2
3. Does the student use carefully considered evidence? | 3 | 11 | 9 | 2

Because this is an introductory course, it is not surprising that a small percentage of students achieved the "exceeds standards" mark for the three measures of assessment. There were some general trends to be seen throughout the course.

Students achieved mixed results in their attempts to demonstrate links between pieces of their evidence. The evidence was presented in a more "fact-centric" manner: this happened, then this happened. The analytical foundations of what drew these facts and events together were absent in many cases. While some reasons for this challenge rest in the fact that this is an introductory course, HIS 103 certainly provides an opportunity to develop students' logical approach to a greater degree, using historical evidence and situations.


MLT 106 – Immunohematology/Serology – 14 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | 13 | 0 | 1 | 0
2. Can the student arrive at a solution? | 12 | 1 | 1 | 0
3. Does the student use carefully considered evidence? | 7 | 6 | 1 | 0

Observations and Recommendations

Most of the students who scored a 3 under considering evidence failed to identify that the screening cells were negative for the corresponding antigen. Their responses were typically biased by a teaser stated in the patient history description. The student who received the 2 rating in each category partially misevaluated the laboratory data, which then resulted in a misinterpretation of the antibody.


PAL 210 – Family Law – 13 Students Assessed

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | partially/inconsistent/attempts to | does not
1. Can the student formulate or evaluate arguments, problems, or opinions? | 4 | 3 | 5 | 1
2. Can the student arrive at a solution, position, or hypothesis? | 4 | 4 | 4 | 1
3. Does the student use carefully considered evidence? | 3 | 5 | 5 | 0

Observations and Recommendations

PAL 210 is a course where the main focus is on the content of the course, and there is not a lot of time to spend on development of writing and critical analysis skills. In the future, it would be more useful to assess this ISLO by comparing results between writing in one of the gateway courses (PAL 110 or PAL 120) and the legal writing course (PAL 260), since the legal writing course focuses on the development of skills crucial to the critical thinking process in the context of legal analysis. PAL 260 was not used for this assessment because the class was cancelled for the fall 2015 semester due to low enrollment.


D. STATISTICAL ANALYSIS

Statistics speak to us only when we have asked them carefully formulated questions. Randomly run data produces random and relatively useless results. Therefore, a series of goals was developed to maximize the useful information available from the assessment data collected in this exercise.

Our first goal focused on the possibility that different types of courses yielded different performances on the three critical-thinking criteria.

The first step toward that goal was to see what different types of courses were assessed. It seemed natural to test whether large courses differed from small ones; therefore, the first data set addressed how many students were surveyed in each course. Three of the courses had more than 50 survey responses, while all others had fewer than 20 respondents. On this basis, courses were separated into groups of "Large" and "Small". Another natural distinction was between introductory and advanced courses. To quantify this, we examined how many prerequisite and concurrent courses were required for each of the courses in the study, and then added the credits for all of those together to find the "Total Credits Required" for each course. See Table 1 for those data.

Two of the courses had no prerequisites, so they were assigned to the "Early" course type. Three courses had some prerequisites but required fewer than 10 total credits, so they were placed in the "Middle" course type. Finally, three courses required more than 10 total credits, so they were placed in the "Advanced" (or "Late") course type.

It was expected that more advanced courses would have fewer students, and we did indeed see that pattern (see Figure 1). BUS 215 was slightly anomalous because it was the only course that was both large and had prerequisites. So, the first conclusion is that more classes with that student profile should be included in future assessments.

The second step in comparing performance across courses was to define performance according to the surveys submitted by faculty. The rubric contained space for a mean course achievement score, based on valuing the four levels of achievement from 1 (lowest) to 4 (highest). Those data were used as one measure of performance. Another measure we considered was the percentage of students who met or exceeded the standards. The final measure of performance calculated was a measure of how variable student achievement was. Large values mean that students were found at every level of achievement, while small values mean that student achievement was concentrated in one or a few levels.
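A minimal sketch of these three performance measures, using the AVI-110 row counts reported above, is given below. It assumes the mean score is the count-weighted average of the level values 4 through 1, "percent success" counts the Exceeds and Meets columns, and variability is the base-2 Shannon entropy of the level frequencies as defined in the Figure 4 caption; the function names and the entropy base are our assumptions.

```python
# Sketch of the three performance measures described above, assuming:
#   mean score   = count-weighted average of the level values 4, 3, 2, 1
#   % success    = share of students at "Exceeds" or "Meets"
#   variability  = base-2 Shannon entropy of the achievement-level frequencies
from math import log2

def performance_measures(counts):
    """counts = [n_exceeds, n_meets, n_developing, n_not_meet]."""
    total = sum(counts)
    mean_score = sum(level * n for level, n in zip((4, 3, 2, 1), counts)) / total
    percent_success = 100 * (counts[0] + counts[1]) / total
    freqs = [n / total for n in counts if n > 0]
    entropy = -sum(p * log2(p) for p in freqs)
    return mean_score, percent_success, entropy

# AVI-110, criterion 1 ("formulate or evaluate"), from the Results by Class section.
print(performance_measures([2, 8, 2, 0]))  # mean 3.0, 83.3% met or exceeded, entropy ~1.25
```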

Once the types of performance were defined, we were able to examine whether there were differences in performance across the different combinations of course Size, Type, and critical-thinking criteria. All of the figures that present this group of results use bar graphs where the height is the mean score and the small vertical lines represent standard errors of the mean.

The first question tested was whether there were performance differences between course sizes across different critical-thinking criteria. For all three performance measures, there was a trend that students in Large classes performed differently than those in Small ones, even though there were no differences among critical-thinking criteria. In the case of Mean Score and Percent Success, Large classes had lower performance, but the difference was not statistically significant. Large classes also had greater variability among their students than the Small classes did, and that difference was marginally significant.


We next tested whether there were performance differences between course types across different critical-thinking criteria. These tests had to be done within the different Size categories because there were no Large Late classes and no Small Early classes. Accordingly, each of the figures is labeled either "Large" or "Small", and then labeled with the performance measure it portrays.

The students in the Small courses showed significant differences between Middle and Late courses in how they performed on the different critical-thinking criteria. Students in the Middle courses performed better (according to Percent Success) and less variably on forming Arguments than on using Evidence. In contrast, students in the Late courses did better (according to Percent Success), and less variably, on using Evidence than on forming Arguments. This is complicated because it is necessary to keep track of Size, Type, and Criterion at the same time in order to see a valid pattern.

There was only one Large course in the Middle Type, so the tests of performance in the Large courses are a little easier to interpret. In short, the students in the Business Law class had very low scores on the using Evidence criterion, both compared to the other criteria in their class and compared to any of the criteria in the Large, Early classes. The forming Conclusions criterion also showed less variation than the other criteria, an effect that was small but significant.

The information provided by this analysis gives us some potentially very valuable data for moving forward. Clearly, such factors as a student's number of semesters at DCC, whether a class is a 100- or 200-level class, and whether the class has prerequisites must be factored into both the cohort assessed and the design of the assessment instrument. It may be more valuable, in analytical terms, to test homogeneous cohorts, and to consider the value of pre- and post-testing of, if not the same individual students, at least the same general population.

Figure 1: The relationship between enrollment and prerequisites among the nine courses in this study. The horizontal axis represents the number of credits earned by taking a course and all of its prerequisites. The vertical axis shows the number of students in each course that participated in the study. For courses that had more than one section and/or instructor, the total across all sections is shown. Symbols designate the Type that each course belongs to, with circles, plus signs, and x's denoting Early, Middle, and Late, respectively. There is a significant negative correlation between course size and total credits (r = -0.765, t7 = -3.14, p = 0.0163). All of the Early courses were Large (at least 50 students), and all of the Late courses were Small (fewer than 20 students). BUS 215 is unusual because it is the only Large course that had any prerequisites.
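As a quick consistency check of the reported correlation statistics (our own illustration, using only the r value and the number of courses given in the caption), the t statistic for a Pearson correlation can be recovered from r and n via t = r * sqrt(n - 2) / sqrt(1 - r^2):

```python
# Recomputing the t statistic for the Figure 1 correlation from the reported
# r = -0.765 and n = 9 courses (7 degrees of freedom). This only checks the
# caption's numbers against each other; it is not a reanalysis of the data.
from math import sqrt

r, n = -0.765, 9
t = r * sqrt(n - 2) / sqrt(1 - r**2)
print(round(t, 2))  # about -3.14, matching the reported t7 = -3.14
```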


Figure 2: Mean achievement scores at each critical-thinking Criterion (dark: constructing an Argument, medium: reaching a Conclusion, light: using supporting Evidence), grouped by course Size (Small if fewer than 50 students, Large otherwise). The vertical lines at the top-center of each bar show the standard error of the means. There are no significant differences in mean achievement score among any combination of course Size or critical-thinking Criterion.

Figure 3: Percent of students that met or exceeded the critical-thinking standard in each Criterion, grouped by course Size. For an explanation of the figure format, see Figure 2. Large courses showed marginally lower success rates for the using Evidence critical-thinking Criterion (effect = -18.41% +/- 11.5, t8 = -1.61, p = 0.0733).

Figure 4: Variability of student outcomes within each critical-thinking Criterion, grouped by course Size. For an explanation of the figure format, see Figure 2. Variability is defined as the Shannon entropy of the frequencies of achievement levels. Large courses had significantly greater variability than Small ones (effect = 0.548 +/- 0.244, t8 = 2.45, p = 0.0199). This makes sense because Large courses have a greater range of student motivation and ability, while students in Smaller, more focused courses are drawn from a smaller, more focused population.


Figure 5: Mean achievement scores of Large courses, grouped by critical-thinking Criterion (dark: constructing an Argument, medium: reaching a Conclusion, light: using supporting Evidence) and course Type (Early if no prerequisites, Middle otherwise). The vertical lines at the top-center of each bar show the standard error of the means. Students scored significantly lower on the using Evidence than on the forming Arguments Criterion (effect = -0.607 +/- 0.0636, t8 = -9.56, p < 0.001). In the Middle course, students had significantly higher achievement scores on the drawing Conclusions Criterion (effect = 0.285, t8 = 3.18, p = 0.007), and lower scores on the using Evidence Criterion (effect = -0.645 +/- 0.089, t8 = -7.17, p < 0.001).

Figure 6: Percent of students that met or exceeded the critical-thinking standard in Large courses. For an explanation of the figure format, see Figure 5. A significantly lower percentage of students met the standard for using Evidence than for forming Arguments (effect = -30.0% +/- 5.58, t8 = -5.38, p < 0.001), and this result was more pronounced for the Large course with prerequisites (effect = -30.01 +/- 7.89, t8 = -4.38, p = 0.001).

Figure 7: Variability of student outcomes within Large courses. For an explanation of the figure format, see Figure 5. There was significantly less variation in student achievement at forming Conclusions than at forming Arguments (effect = -0.136 +/- 0.0312, t8 = -4.35, p = 0.001).


Figure 8: Mean achievement scores of Small courses, grouped by critical-thinking Criterion (dark: constructing an Argument, medium: reaching a Conclusion, light: using supporting Evidence) and course Type (Middle if the course and its prerequisites total fewer than 10 Total Credits, Late otherwise). The vertical lines at the top-center of each bar show the standard error of the means. Students in the Late courses had achievement scores that were marginally higher than those of students in Middle courses (effect = 0.410 +/- 0.297, t8 = 1.38, p = 0.102).

Figure 9: Percent of students that met or exceeded the critical-thinking standard in Small courses. For an explanation of the figure format, see Figure 8. A marginally significantly greater percentage of students in Late courses met standards than in Middle courses (effect = 15.5% +/- 10.2, t8 = 1.53, p = 0.083), but this difference was significantly less pronounced for the using Evidence Criterion (effect = -12.2% +/- 4.38, t8 = -2.78, p = 0.012).

Figure 10: Variability of student outcomes within Small courses. For an explanation of the figure format, see Figure 8. There was marginally less variation in student achievement in Late courses than in Middle ones (effect = -0.380 +/- 0.212, t8 = -1.79, p = 0.055), except for the using Evidence Criterion, for which variation in outcome was just as high in both Middle and Late courses (effect = 0.417 +/- 0.134, t8 = 3.12, p = 0.007).


Aggregate Assessment Results

Another approach to interpreting the quantitative results of the ISLO 6 assessment is to examine statistics on competency in the individual criteria and an overall aggregate of competency across all three criteria. Table 1, below, lists the courses assessed with their surveyed enrollments, prerequisites, and credits; the rubric that follows summarizes overall performance.

Table 1: Courses assessed, number of students surveyed, prerequisites, attributes, and credits.

Course | # Surveyed | Prerequisites (credits) | Attributes | Course Credits
AVI 110 | 12 | AVI 100 (1), AVI 102 (3) | | 3
BHS 103 | 132 | none | Gen Ed C, Elective | 3
BUS 215 | 79 | BUS 102 (3) / BUS 104 (3) / PAL 120 (3) | Elective | 3
CPS 231 | 14 | CPS 141 (4), CPS 142 (3), MAT 184 (3) | | 3
ELT 213 | 12 | ELT 105 (3), ELT 106 (3), MAT 184 (3) | | 3
ELT 250 | 14 | ELT 105 (3), ELT 106 (3), ELT 108 (3), ELT 115 (3), ELT 218 (3), MAT 184 (3) | | 1
GOV 121 | 143 | none | Gen Ed D, Elective | 3
MLT 106 | 14 | MLT 101 (4), MLT 105 (4), MLT 202 (3) | | 3
PAL 210 | 13 | PAL 110 (3), PAL 120 (3) | | 3

Criterion | 4 = Exceeds | 3 = Meets | 2 = Developing | 1 = Does Not Meet Standard
Key words | comprehensively | accurately | inconsistent/attempts to | does not
1. Can the student formulate the problem? | 37% (213 students) | 32% (188 students) | 20% (119 students) | 11% (62 students)
2. Can the student arrive at a solution? | 35% (201 students) | 34% (196 students) | 24% (138 students) | 8% (48 students)
3. Does the student use carefully considered evidence? | 26% (148 students) | 31% (180 students) | 27% (158 students) | 16% (92 students)
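As a small arithmetic check (ours, using only the student counts in the aggregate rubric above), the "met or exceeded" rates quoted in the criterion summaries below can be recomputed directly from the raw counts; small differences from the published figures may remain because the published percentages were rounded before being added.

```python
# Recomputing the aggregate "met or exceeded" rates from the raw counts above.
# Counts per criterion are ordered Exceeds, Meets, Developing, Does Not Meet.
aggregate_counts = {
    "1. Formulate the problem": [213, 188, 119, 62],
    "2. Arrive at a solution": [201, 196, 138, 48],
    "3. Use carefully considered evidence": [148, 180, 158, 92],
}

for criterion, counts in aggregate_counts.items():
    total = sum(counts)
    met_or_exceeded = 100 * (counts[0] + counts[1]) / total
    print(f"{criterion}: {met_or_exceeded:.0f}% met or exceeded the standard (n = {total})")
# Roughly 69%, 68%, and 57%, consistent (up to rounding) with the summaries that follow.
```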


Criterion One: Can the student formulate or evaluate arguments, problems, or opinions?

A total of 69% of the students assessed demonstrated that they met or exceeded the criterion to formulate the problem. This suggests an acceptable rate of achievement.

Criterion Two: Can the student arrive at a solution, position, or hypothesis?

A total of 69% of the students assessed demonstrated that they met or exceeded the criterion to arrive at a solution. This suggests an acceptable rate of achievement.

Criterion Three: Does the student use carefully considered evidence?

A total of 57% of the students assessed demonstrated that they met or exceeded the criterion to use carefully considered evidence. This suggests the need for an action plan.


E. SUMMARY AND CONCLUSIONS

Sean Robinson, in his much-cited paper "Teaching Critical Thinking at the Community College," states that "Information that is significant in finding an acceptable resolution to a problem must be distinguished from that which is unusable, trivial, and extraneous." This is a skill we strive to develop in our students; however, no truer words could be said about our own task of assessing critical thinking. We have drawn important and informative conclusions about critical thinking at DCC from two main sources: statistical analysis, and the reflections and conclusions of the teachers who created and administered each of the assessment instruments.

The pie charts above present the aggregate mastery of critical thinking by DCC students through time. We can be reassured by the increasing percentage of the "pie" represented by the combined green and blue "meets" and "exceeds" standards as our students move toward completion of their DCC degrees. As the red and yellow sections decrease in size, we can be confident that fewer and fewer of our students are failing to meet the critical thinking benchmark.

The statistical analysis of the results of our study suggests that future critical thinking assessment instruments can and should be refined by student academic year and by 100- versus 200-level courses.


Capping courses and vocational courses might be segregated from other course sets when a large representative sample is desired. The aggregate performance of our students suggests that, overall, our students performed at an acceptable level on criteria One and Two. The results also indicate that, of the three criteria, number three shows a statistically significantly lower rate of achievement. This suggests that our students have considerable room for improvement in using carefully considered evidence. With this clear weakness in performance identified, it becomes possible for DCC to address teaching this skill in a more robust and effective way across the disciplines.

Future assessments of critical thinking at DCC would likely increase in effectiveness, that is, in the degree to which teaching critical thinking at DCC can be improved, if the questions and rubrics are designed to be more "statistically friendly." With focused goals, these statistics can be mined for more useful applied information. Based on the information we have acquired from this Critical Thinking assessment exercise, it is almost certain that more accurate, focused, and informative results can be achieved by DCC in further assessments if we incorporate what we have learned from this first experience and share our findings across the campus community.


F. RECOMMENDATIONS

The evidence presented in this report is the basis on which the Critical Thinking Assessment Team developed the following recommendations for continued improvement in the successful teaching of Critical Thinking at Dutchess Community College. Among the possible approaches discussed by the Committee were the use of a "Global Assessment" purchased from an outside agency, a "Global Assessment" developed by the DCC academic community to be used across all disciplines at DCC, and pre- and post-testing. The Committee arrived at the following positions on these issues:

Regarding the use of a “global” assessment

A "global" exam, administered either by an outside agency or internally by the College, would not result in useful information regarding the development of critical thinking skills in the College's programs. This belief derives mostly from the fact that what critical thinking "looks like" is rather discipline-specific; the rubric created for this assessment activity states a broad definition of critical thinking agreed upon by faculty from many disciplines, but that same rubric was put to use through different tools. We feel the results of that assessment are more authentic for the kind of information we wish to obtain regarding the critical thinking skills of the students in our programs.

Regarding pre- and post-tests

Pre- and post-testing raises a number of issues for our programs. First, attrition rates within particular programs would need to be addressed: in larger programs (such as LAH), a number of students might enroll and complete courses typically taken early in the program (such as BHS 103 or ENG 101) but not continue in the program through to the 200-level courses taken toward the end of that program. Also, the students taking those 200-level classes have, by design, been brought to that point by unofficial gate-keeping procedures (i.e., being able to pass the lower-level classes, and therefore exhibiting the ability to meet certain learning outcomes), so assessment results will necessarily be affected by that change in the population of participants. However, to gauge the impact individual programs have on critical thinking skills, a future assessment might look to specifically include courses taken in the first semester of a program (ENG 101 or BHS 103, for instance) and those taken later (again, the 200-level or capstone courses within a program), in order to see whether any differences in the data might reveal information helpful in determining the overall effectiveness of the program in meeting the learning outcome.

Understanding the discrepancy between perceptions and the results of the assessment

In the Assessment Methodology section of this document, it is noted that at a preliminary meeting to discuss this learning outcome, the faculty members and administrators present "focused on three primary weaknesses exhibited across the DCC student body," which included students' "simplistic" thinking, reliance on "opinion," "difficulty reading and thinking critically," and "difficulty identifying and applying factual evidence to questions." However, the results of the assessment show that nearly two-thirds of the students studied either met or exceeded the standard set for this learning outcome (and much of the data came from BHS 103 and GOV 121, two courses typically taken in the first semester or two of a student's program). This fact raises a number of questions: Are the assumptions about students' critical thinking abilities unfounded? Do those assumptions derive only from the one-third of students who do not meet the standards? Do instructors hold students to a different standard in their minds than they do when tasked with assessing those students' work with the rubric? Does something happen within the programs that changes students' abilities? These questions should be discussed within programs and departments and perhaps addressed in future assessments of this learning outcome.


Specific Recommendations:

1. As an institution, DCC should develop a single guiding set of expectations for students, but invite programs, disciplines, or courses to further refine the rubric's language to reflect the expectations of the discipline.

2. The Critical Thinking assessment should include a large enough sample of both 100-level and 200-level courses. By specifically including courses taken in the first semester of a program (ENG 101 or BHS 103, for instance) and those taken later (again, the 200-level or capstone courses within a program), we may see whether any differences in the data reveal information helpful in determining the overall effectiveness of the program in meeting the learning outcome.

3. DCC should address the merits of assessing sections taught by both full-time and part-time instructors, and of creating a statistical model that would allow sorting and interpreting these results.

4. Criterion 3, which addressed students' abilities to use carefully considered evidence, showed statistically weaker results than criteria one and two. DCC, as a teaching community, should address teaching this skill in a more robust and effective way across the disciplines.

5. A negative correlation was found between course completion and class size. This finding supports ongoing discussions on campus around smaller class sizes and stronger learning outcomes and should be included in the overall analysis of ISLO outcomes.

6. Capping courses and vocational courses might be segregated from other course sets when a large representative sample is desired.

7. The data from each ISLO assessment should be disseminated to Department heads and Program chairs, with further conversation in the PAC. This allows for discussion and action at three different levels and facilitates communication from the individual faculty teaching the courses all the way through Department heads and the PAC.