
    Using Multiple Measures to Make Math Placement Decisions:

    Implications for Access and Success in Community Colleges

    Federick Ngo

    Will Kwon

    University of Southern California

    July 2014

    [A more recent version of this manuscript is forthcoming in Research in Higher Education]

    Corresponding Author:

    Federick Ngo

    Rossier School of Education, University of Southern California

    3470 Trousdale Parkway, Waite Phillips Hall WPH 503C

    Los Angeles, CA 90089

    Email: [email protected]

    Phone: 510-326-6037



    Fax: 213-740-3889

    Abstract

    Community college students are often placed in developmental math courses based on the results

    of a single placement test. However, concerns about accurate placement have recently led states

    and colleges across the country to consider using other measures to inform placement decisions.

    While the relationships between college outcomes and such measures as high school GPA, prior

    math achievement, and noncognitive measures are well-known, there is little research that

    examines whether using these measures for course placement improves placement decisions. We

    provide evidence from California, where community colleges are required to use multiple

    measures, and examine whether this practice increases access and success in college-level

    courses. Using data from the Los Angeles Community College District, we find that students

    who were placed into higher-level math due to multiple measures (e.g., GPA and prior math

    background) performed no differently from their higher-scoring peers in terms of passing rates

    and long-term credit completion. The findings suggest that community colleges can improve

    placement accuracy in developmental math and increase access to higher-level courses by

    considering multiple measures of student preparedness in their placement rules.


    Using Multiple Measures to Make Math Placement Decisions:

    Implications for Access and Success in Community Colleges

    An examination of math assessment and course placement in community colleges shows

    that many students are deemed unprepared for the demands of college-level work. It is estimated

    that over 60 percent of community college students nationally are placed in at least one

    postsecondary remedial or developmental course upon entry (NCPPHE & SREB, 2010; Bailey,

    2009).1 Although developmental courses can serve as necessary and helpful stepping-stones to

    college success, they can also delay access to critical gateway courses necessary for degree

    attainment or transfer to four-year colleges. This is of concern because recent descriptive

    research shows that only a small proportion of students placed in lower levels of developmental

    math sequences enroll in and pass the subsequent math courses needed to attain an associate’s

    degree or transfer (Bailey, Jeong, & Cho, 2010; Fong, Melguizo, Bos, & Prather, 2013). Given

    that students placed in developmental math sequences also incur substantial costs in the form of

    time and money (Melguizo, Hagedorn, & Cypers, 2008), it is critical to accurately assess and

    place students into the courses where they are most likely to succeed while not unnecessarily

    extending their time towards degree completion or transfer.

    Placement tests are commonly used in community colleges across the country to make

    these initial course placement decisions (Hughes & Scott-Clayton, 2011). While practices vary

    by state and even at the local college level, an increasing number of states have mandated

    placement testing and the use of common assessments, seeing placement policies as a potential

    lever for increasing student success (Collins, 2008). At the same time, studies have provided

    evidence that placement tests have low predictive validity and are only weakly correlated with

    1 The terms remedial, developmental, basic skills, and preparatory are often used interchangeably in reference to the set of courses that precede college-level courses. We prefer to use the term developmental.


    students’ college outcomes, such as college GPA or credit completion (Armstrong, 2000), and

    that as many as one-quarter of community college students may be severely misassigned to their

    math courses by placement tests (Scott-Clayton, Crosta, & Belfield, 2012). These same studies

    suggest that using other measures, such as information from high school transcripts, may be

    more accurate for placing students than using placement tests alone.

    Amidst these concerns, several states have revised policies to incorporate the use of

    multiple measures in their assessment and placement policies for developmental math (Burdman,

    2012). North Carolina, for example, has developed a customized placement assessment that

    includes gathering information from multiple measures, such as high school grades and

    noncognitive measures (Burdman, 2012). The Texas Success Initiative (TSI) includes revised

    assessment and cut score standards, and includes the recommendation that additional multiple

    measures such as high school GPA, work hours, or noncognitive measures be considered in

    conjunction with assessment test scores (Burdman, 2012; Texas Higher Education Coordinating

    Board (THECB), 2012). Connecticut’s SB-40 and Florida’s Senate Bill 1720 have proposed

    similar policies to incorporate multiple measures.

    While existing studies have shown that measures such as high school GPA or course

    completion are predictive of college outcomes (Belfield & Crosta, 2012; Scott-Clayton, 2012),

    there is little evidence that using these measures to make placement decisions is an effective

    practice in terms of access and success for community college students. This study addresses this

    research gap. We draw upon a statewide placement policy for community colleges to identify

    measures that are useful for assigning students to developmental math courses. Two research

    questions frame our analysis: 1) Does using multiple measures increase access to higher-level

    math courses, particularly for groups disproportionately impacted by remediation? 2) How do


    students who are placed into a higher-level math course on the basis of these additional measures

    perform in comparison to their peers? We show that two measures in particular – high school

    GPA and information about prior math course-taking and achievement – can increase access to

    higher-level math courses and ensure that students are successful in those courses.

    The evidence comes from California, which has required community colleges to use

    multiple measures to make course placement determinations since the early 1990s (CCCCO,

    2011). This policy shift occurred after advocacy groups challenged the accuracy of placement

    tests and fairness of using tests alone to make placement decisions, based on evidence that

    underrepresented minority students were being disproportionately placed into remedial courses

    (Perry, Bahr, Rosin, & Woodward, 2010). The revised state policy prohibited the practice of

    using a single assessment instrument and instead promoted the use of multiple measures, with

    the goals of mitigating the disproportionate impact of remediation on underrepresented minority

    students and increasing access to college-level courses.2 However, whether the students who benefit from this policy succeed in these higher-level courses remains an open question.

    In this study, we examine the extent to which using multiple measures for course

    placement achieves the dual goals of access and success. We present evidence from the Los

    Angeles Community College District (LACCD), the largest community college district in

    California, and one of the largest in the country. During the matriculation process in LACCD,

    students provide additional information regarding their educational background or college plans

    in addition to taking a math placement test. In most of the LACCD colleges, this multiple

    measure information is used to determine whether students should receive points in addition to

    their placement test score, which can sometimes result in a student being placed into the next

    higher-level course. We call this a multiple measure boost. Using district-level administrative

    2 Details of the policy are provided in the Appendix.


    and transcript data from 2005-2008, we examine the impact of the multiple measure boost on

    access and success in developmental math. Individual college policies in LACCD allow us to

    focus on two measures in particular: prior math achievement and high school GPA, each of

    which is used on its own in some LACCD colleges. Prior research has predicted, but not demonstrated, that these measures assign students to courses more accurately than placement tests alone (Scott-Clayton, Crosta, & Belfield, 2012).

    We begin with a review of the literature on measures that are commonly used to identify

    college readiness. Modern conceptions of validation provide the framework that we use to

    examine the usefulness of multiple measures for making placement decisions. Following this

    theoretical discussion, we describe the data and the implementation of the multiple measures

    policy in the LACCD, and provide descriptive evidence addressing the first question of access to

    higher-level courses. Our findings indicate that while using multiple measures does increase

    access to higher-level courses, the racial composition of courses remains largely unchanged. We

    then use multivariate linear regression to compare the outcomes of students who received a

    multiple measure boost into a higher-level course with those of their higher-scoring peers. We

    find that students who received a multiple measure boost based on prior math course-taking or

    high school GPA performed no differently from their peers in terms of course passing rates as

    well as longer-term credit completion. We conclude by discussing the implications of our

    findings for assessment and placement policies in developmental math.
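    The core empirical comparison just described can be illustrated with a linear probability model. The snippet below uses simulated data and hypothetical variable names (score, boosted, passed); it is a sketch of the general approach, not the paper's actual specification, which includes additional controls.

```python
import numpy as np

# Simulated data with hypothetical variable names; passing depends on the
# placement score but not on boost status, mimicking a "helpful" boost.
rng = np.random.default_rng(0)
n = 500
score = rng.uniform(30, 70, n)                        # raw placement test score
boosted = (rng.uniform(size=n) < 0.2).astype(float)   # received a boost
passed = (rng.uniform(size=n) < 0.3 + 0.01 * (score - 30)).astype(float)

# Linear probability model: passed ~ const + boosted + score. The coefficient
# on `boosted` compares boosted students with higher-scoring peers; a value
# near zero corresponds to "performed no differently."
X = np.column_stack([np.ones(n), boosted, score])
coef, *_ = np.linalg.lstsq(X, passed, rcond=None)
print(round(float(coef[1]), 3))
```

    Because the simulated pass probability does not depend on boost status, the estimated boost coefficient should be close to zero, which is the pattern the paper reports for the LACCD data.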

    Literature Review: Identifying Readiness for Developmental Math

    Absent alignment between the K-12 and higher education systems, community colleges

    need some means of identifying students' preparedness for college-level work. However, with

    neither a common definition of college readiness nor a common approach to remediation, a


    variety of measures are utilized to identify student skill level and college preparedness (Conley,

    2007; Merisotis & Phipps, 2000; Porter & Polikoff, 2012). These measures often include

    standardized placement test scores and information from high school transcripts, as well as

    information gleaned from student interviews with counselors.

    An important task for researchers has been to identify and validate measures that are

    predictive of college success. Validation has generally involved testing a group of subjects for a certain construct and then comparing those test results with outcomes observed at some point in the future, such as college persistence, grades, or completion of a college credential (AERA, APA, & NCME, 1999; Kane, 2006). This provides an indicator of predictive validity, which is the ability

    of a measure to predict future outcomes given present information. Here, we review the literature

    on the predictive validity of common measures used to identify readiness for college-level work.

    Placement Tests

    Standardized placement tests are the most common instruments that community colleges

    use to assess students and deem them college-ready or place them in developmental math courses

    (Burdman, 2012; Hughes & Scott-Clayton, 2011). These placement tests, many of which are

    now computerized, can be less time-consuming and resource-intensive than interviews or

    reviews of individual applications and transcripts (Hughes & Scott-Clayton, 2011). The

    computerized format can also enable colleges to assess many students and provide course

    placement results more quickly. There is considerable variation in the types of tests used across

    colleges, but ACCUPLACER and COMPASS, two commercially produced tests, are among the

    most common (Hughes & Scott-Clayton, 2011).

    Commercially-produced tests, such as ACCUPLACER, generally provide predictive

    validity estimates for their products (e.g., Mattern & Packman, 2009). In addition, individual


    colleges are advised to conduct validations within their own settings and with respect to their

    uses of the assessments (Kane, 2006). However, in an examination of validation practices across

    the U.S., Fulton (2012) found that colleges vary in terms of how they validate their placement

    tests, with only a handful of states or college systems having validation requirements.

    Research studies have provided some evidence that placement tests have low predictive

    validity, finding weak correlations between placement tests and students’ course passing rates

    and college grades (Armstrong, 2000; Belfield & Crosta, 2012; Jenkins, Jaggars, & Roksa, 2009;

    Medhanie et al., 2012; Scott-Clayton, 2012). For example, after investigating the predictive

    validity of placement tests across the Virginia Community College System, Jenkins et al. (2009)

    found only weak correlations between placement test scores and student pass rates for both

    developmental and college-level courses. These findings may reflect the fact that college

    readiness is a function of several academic and non-academic factors that placement tests do not

    adequately capture (Karp & Bork, 2012). In fact, Belfield and Crosta (2012) found that the

    positive but weak association between placement test scores and college GPA disappeared after

    controlling for high school GPA, suggesting that high school information may offer more useful

    measures for course placement.

    High School Information

    While standardized placement tests are the most common instruments that community

    colleges use to assess and place students in developmental math courses, there is growing

    interest in incorporating high school information into the placement decision. High school

    transcripts can provide information about academic competence, effort, and college-readiness

    that placement tests do not measure. For example, high school grades have been found to better

    predict student achievement in college than typical admissions tests do (Geiser & Santelices,


    2007; Geiser & Studley, 2003), and this relationship may be even more pronounced in

    institutions with lower selectivity and academic achievement (Sawyer, 2013). This may stem

    from the ability of report card grades to assess competencies associated with students' self-control, which can help students study, complete homework, and have successful classroom

    behaviors (Duckworth, Quinn, & Tsukayama, 2012).

    In the community college setting, measures of prior math course-taking, such as the

    number of high school math courses, grades in high school math courses, and highest level of

    math taken have been found to be better predictors of achievement than placement test score

    alone (Lewallen, 1994). Adelman (2006) demonstrated that a composite of student performance

    (i.e., GPA or class rank and course-taking), what he referred to as students' “academic

    resources,” can be useful information for identifying readiness for college-level work and can be

    highly predictive of college success. DesJardins and Lindsay (2007) confirmed these findings in

    subsequent analyses. Similar work in California demonstrates that scores on the California High

    School Exit Exam and high school transcript information are also predictive of math readiness

    (Jaffe, 2012; Long Beach Promise Group (LBPG), 2013). This type of evidence has led some

    community colleges to partner with local school districts and experiment with using high school

    information in developmental course placement (Fain, 2003; LBPG, 2013).

    Hesitation to use high school background information for placement purposes may be due

    to concerns about the consistency of these measures. High school graduation, for example, is not

    widely accepted as evidence of college readiness because of the wide variability in the quality of

    high school experiences (Sommerville & Yi, 2002). Also, there is no common metric or meaning

    across all high schools in regards to student performance and course-taking (Porter & Polikoff,

    2012). Grades and summative assessments from high school vary both in rigor and breadth of


    content, making them more difficult for colleges to use systematically as college readiness

    indicators (Maruyama, 2012).

    Nonetheless, the empirical evidence described above suggests that certain combinations

    of measures may be the strongest predictors of college performance (Adelman, 2006; DesJardins

    & Lindsay, 2007). For example, Belfield and Crosta (2012), finding that such measures as prior

    math background in conjunction with high school GPA are strongly associated with college

    outcomes, hypothesized that “the optimal decision rule may be to combine information from a

    placement test with a high school transcript" (p. 4). Similarly, Noble and Sawyer (2004) argued

    that test scores, high school grades, and other measures could be used jointly to identify students

    who are ready for college-level work.

    Noncognitive Measures

    Research in educational psychology further suggests that an array of factors beyond

    cognitive intelligence and skills are predictive of college success and future outcomes

    (Duckworth, Peterson, Matthews, & Kelly, 2007; Heckman, Stixrud, & Urzua, 2006; Sedlacek,

    2004). Sedlacek (2004), for example, argues that noncognitive measures of adjustment,

    motivation, and perception are strong predictors of success, particularly for under-represented

    minority students. In a longitudinal study of community college students, Porchea, Allen,

    Robbins, and Phelps (2010) found an integration of psychosocial, academic, situational, and

    socio-demographic factors to be predictive of persistence and attainment, with motivation being

    among the strongest predictors of future achievement. This may be due to the ability of these

    variables to capture the effect of unobserved student characteristics associated with success, such

    as the importance of college to a student, preference and perseverance towards long-term goals,

    effort, and self-control (Duckworth et al., 2007; Duckworth et al., 2012; Sedlacek, 2004).


    Given these findings, there is increasing interest in and advocacy for using noncognitive

    measures for course placement, which may provide colleges with a vital source of holistic

    student information (Boylan, 2009; Hodara, Jaggars, & Karp, 2012). The ACT and ETS, for

    example, have developed noncognitive assessments such as the ACT ENGAGE assessments and

    the ETS Personal Potential Index (ACT, Inc., 2012; ETS, 2013), which identify noncognitive

    attributes associated with student success in college and are predictive of student performance

    and persistence (Allen, Robbins, Casillas, & Oh, 2008; Robbins, Allen, Casillas, Peterson, & Le,

    2006). In practice, however, very few institutions use noncognitive measures for placement

    purposes (Gerlaugh, Thompson, Boylan, & Davis, 2007; Hughes & Scott-Clayton, 2011). This

    may be due to faculty perceptions that self-reported student information is inaccurate or

    irrelevant (Melguizo, Kosiewicz, Prather, & Bos, forthcoming), or to the lack of evidence about

    their ability to improve placement decisions.

    Using Multiple Measures for Course Placement

    This scan of the literature reveals that while researchers have identified cognitive and

    noncognitive measures that are strongly associated with and predictive of student outcomes,

    there is relatively scant evidence showing that using these measures to make course placement

    decisions would be beneficial. This is an important distinction because even though there may be

    a strong positive correlation between a measure such as high school GPA and passing the course

    in which a student enrolled (i.e., predictive validity), we cannot conclude that the same

    relationship would hold if that student were placed into a course under a decision rule that

    incorporated GPA as a placement measure.

    Scott-Clayton et al. (2012) examined both district- and state-wide community college

    data and estimated that placement using high school GPA instead of tests would significantly


    reduce the rate of severe placement errors in both developmental math and English courses.

    Aside from these prediction-based estimates, the only empirical evidence on actual placement

    decisions has come from institutional research, such as one experimental study that utilized a

    randomized design to determine the impact of different placement schemes. Marwick (2004)

    found that Latino students in one community college who were placed into higher-level courses

    due to the use of multiple measures (high school preparation and prior math coursework)

    achieved equal and sometimes greater outcomes than when only placement test scores were

    considered. Another report of an on-going study by the Long Beach Promise Group (2013)

    shows that students who were placed in courses via a “predictive placement” scheme based on

    high school grades instead of test scores spent less time in developmental courses and were more

    likely to complete college-level English and math courses.

    Overall, there is limited use of multiple measures during assessment and placement for

    developmental math, and this may stem from a lack of evidence about their ability to improve

    placement decisions. Furthermore, qualitative research has found that faculty and staff often do

    not feel supported in the identification and validation of measures that can be incorporated into

    placement rules, while others perceive measures besides test scores to be insignificant (Melguizo

    et al., forthcoming). Given the numerous studies demonstrating the predictive validity of these

    other measures, it is important to gather evidence on the usefulness of measures for making

    course placement decisions. This involves a process of validation, which is described next.

    Conceptual Framework

    Validation

    The multiple measures mandate in California provides a unique opportunity to validate

    placement criteria in terms of their usefulness for making course placement decisions. This


    approach is in line with modern conceptions of validation, which emphasize not just accurate

    predictions, but actual success (Kane, 2006). From this perspective, the validity of a measure such as

    a placement test is based on the decisions or proposed decisions made using the test (AERA et al., 1999;

    Kane, 2006; Sawyer, 2007). A validation argument considers the goals and uses of a measure to be

    more important than its predictive properties, and emphasizes the examination of outcomes that

    result from proposed uses (Kane, 2006). Therefore, in seeking to justify the use of a measure, it

    is necessary to demonstrate that the positive consequences of use outweigh any negative

    consequences. If the intended goals are achieved, then policies can be considered successes; if goals are not achieved, then policies would be considered failures (Kane, 2006).

    The measures used to make course placement decisions in developmental math would

    thus be evaluated in terms of student outcomes – placement and success in the highest-level

    course possible (Kane, 2006), and the frequency with which these accurate placements occur

    (Sawyer, 1996; 2007; Scott-Clayton, 2012). Following this validation approach, measures used

    for placement would be considered helpful if they place students in a level of coursework where

    they are likely to be successful, and harmful if students are placed in a level where they are

    unlikely to be successful. We next expand this validation argument to consider the use of

    multiple measures in conjunction with test scores to make course placement decisions.

    Placement Decisions

    Assume that a math assessment enables us to make inferences about the academic

    preparation of a math student. Students who receive low scores have low academic preparation

    and students with high scores have high academic preparation. A typical placement policy would

    sort students into various math levels based on scores from this math assessment.

    For a simple model, let:

    SL = Student with low academic preparation; CL = Low-level course


    SH = Student with high academic preparation; CH = High-level course.

    Let P be the probability of successfully passing the course, such that P(SLCL) ≥ P(SLCH) and P(SHCL) ≥ P(SHCH); the probability of passing a low-level course is greater than the probability of passing a high-level course, for both types of students. Additionally, P(SHCL) ≥ P(SLCL) and P(SHCH) ≥ P(SLCH); the probability of passing a given course is higher for a high academic preparation student than for a low academic preparation student. Transitivity should predict that P(SHCL) ≥ P(SLCH), and as a result, there are only two possible monotonic distributions:

    P(SLCH) ≤ P(SLCL) ≤ P(SHCH) ≤ P(SHCL) (1)

    P(SLCH) ≤ P(SHCH) ≤ P(SLCL) ≤ P(SHCL) (2)

    If the raw assessment test score correctly places students in the appropriate math courses

    (i.e., cutoff scores are correct), every low academic preparation student should be placed into the

    low-level course and every high academic preparation student should be placed into the high-

    level course. The placements (SHCL) and (SLCH) should not occur.
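    The simple model above can be made concrete in a short sketch. The Python snippet below is purely illustrative: the four pass probabilities are made-up values (not estimates from the LACCD data), and the functions merely encode the model's two assumptions and identify which of the two admissible orderings holds.

```python
# Hypothetical pass probabilities, for illustration only (not LACCD estimates).
# Key: (student type, course level) -> probability of passing.
P = {
    ("SL", "CH"): 0.35,  # low-preparation student in the high-level course
    ("SL", "CL"): 0.60,  # low-preparation student in the low-level course
    ("SH", "CH"): 0.70,  # high-preparation student in the high-level course
    ("SH", "CL"): 0.85,  # high-preparation student in the low-level course
}

def satisfies_model(p):
    """Check the two assumptions: the low-level course is easier for both
    student types, and the high-preparation student is more likely to pass
    either course."""
    easier_low = (p[("SL", "CL")] >= p[("SL", "CH")]
                  and p[("SH", "CL")] >= p[("SH", "CH")])
    prep_helps = (p[("SH", "CL")] >= p[("SL", "CL")]
                  and p[("SH", "CH")] >= p[("SL", "CH")])
    return easier_low and prep_helps

def monotonic_ordering(p):
    """Return 1 or 2, the admissible ordering that holds. Both orderings put
    P(SLCH) lowest and P(SHCL) highest; they differ only in the middle pair."""
    assert satisfies_model(p)
    return 1 if p[("SL", "CL")] <= p[("SH", "CH")] else 2

print(monotonic_ordering(P))  # prints 1 for these illustrative values
```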

    Placement Using Multiple Measures

    Including multiple measures can be thought of as increasing collateral information, which

    should improve the accuracy of placement decisions (van der Linden, 1998). Consider a decision

    in which other relevant information from multiple measures is included and students can earn

    additional points which are added to the raw test score. In some cases, students identified as low

    academic preparation by the raw test score may be placed higher if the total score with additional

    points surpasses the cutoff score. This multiple measure boost thus places the low academic

    preparation student into the higher-level course, making SLCH possible. The boosted students

    would have had among the highest scores on the placement test had they remained in the lower


    level. As a result of the multiple measure boost, they are now the lowest-scoring students in the

    higher-level course.
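    The boost mechanism described above amounts to adding points to the raw score before applying the cutoffs. The following sketch illustrates such a decision rule; the point values, cutoff scores, and course names are hypothetical and do not reflect any LACCD college's actual scheme.

```python
# Hypothetical cutoffs, checked from the highest level down: a total score at
# or above the cutoff places the student into that course.
CUTOFFS = [
    ("intermediate algebra", 70),
    ("algebra", 50),
    ("pre-algebra", 30),
    ("arithmetic", 0),
]

def multiple_measure_points(hs_gpa, highest_hs_math):
    """Illustrative point schedule built from the two measures this study
    focuses on: high school GPA and prior math background."""
    points = 0
    if hs_gpa >= 3.0:
        points += 5
    if highest_hs_math in ("algebra 2", "precalculus", "calculus"):
        points += 5
    return points

def place(raw_score, hs_gpa, highest_hs_math):
    """Add multiple-measure points to the raw test score, then apply cutoffs."""
    total = raw_score + multiple_measure_points(hs_gpa, highest_hs_math)
    for course, cutoff in CUTOFFS:
        if total >= cutoff:
            return course

# A raw score of 47 alone places into pre-algebra; a 3.2 GPA plus Algebra 2
# on the transcript adds 10 points, so the student places into algebra
# instead: a "multiple measure boost."
print(place(47, 3.2, "algebra 2"))  # prints "algebra"
```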

    The question of interest is whether the boosted students are equally likely to succeed

    when compared with other students in the higher-level course despite having lower raw

    placement test scores. Following the approach to validation suggested by Kane (2006), the

    multiple measure boost can be considered as helpful if boosted students are at least as likely to

    pass the higher-level course as their comparable peers. Should the boost be helpful, then there is

    an increase in placement accuracy.3 The boost is harmful if the boosted students are less likely to

    pass the high-level course than their peers. In this case, the student would be better served if

    placed in the lower-level course. Empirically, the comparison of probabilities is between

    P(SLCH) and P(SHCH), where the boosted student is compared with other non-boosted students in

    the high-level course. The multiple measure boost can be considered as helpful if P(SLCH) ≈

    P(SHCH) or harmful if P(SLCH) < P(SHCH).4 We use this validation argument to proceed with our

    analysis of student outcomes in the Los Angeles Community College District (LACCD), a

    context where multiple measures are used in conjunction with test scores to inform placement

    decisions in developmental math.
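    The validation criterion can likewise be expressed as a simple comparison of estimated pass rates. The sketch below is illustrative only: it uses a raw difference in proportions with an arbitrary tolerance, whereas the analysis in this paper relies on regression with controls.

```python
def classify_boost(boosted_outcomes, peer_outcomes, tolerance=0.05):
    """Label the boost "helpful" if boosted students' pass rate is within
    `tolerance` of (or above) that of their non-boosted peers in the same
    course, and "harmful" otherwise. Outcomes are 1 (passed) / 0 (did not
    pass); the tolerance is an arbitrary illustrative threshold."""
    p_boosted = sum(boosted_outcomes) / len(boosted_outcomes)  # estimates P(SLCH)
    p_peers = sum(peer_outcomes) / len(peer_outcomes)          # estimates P(SHCH)
    return "helpful" if p_boosted >= p_peers - tolerance else "harmful"

# Toy data: boosted students pass at 0.70, peers at 0.72.
print(classify_boost([1] * 7 + [0] * 3, [1] * 18 + [0] * 7))  # prints "helpful"
```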

    Setting: Multiple Measures in the LACCD

    The LACCD is composed of nine community colleges serving nearly 250,000 students

    annually, making it the largest community college district in California and one of the largest in

    the country. According to our calculations, roughly 80 percent of students entering the LACCD

    each year are placed in developmental math courses. In most of the colleges, the developmental

    math sequence is comprised of four courses and includes arithmetic, pre-algebra, algebra, and

    3 Unobservable factors such as easiness of grading or grade inflation at the classroom level could make it possible for boosted students to have a higher probability of passing the higher-level course than the lower-level course: P(SLCH) > P(SLCL).

    4 Unobservable factors such as diligence/effort could make it possible for the boosted students to have a greater probability of passing the high-level course than more academically-prepared students: P(SLCH) > P(SHCH).


    intermediate algebra. This means an entering student can be placed several levels below college-

    level, extending time towards degree or certificate attainment.

    According to college policies, students seeking to enroll in degree-applicable or transfer-

    level math courses in one of the LACCD colleges must take an assessment test to determine

    course placement.5 The LACCD colleges have opted to use the ACCUPLACER, COMPASS, or

    Mathematics Diagnostic Testing Program (MDTP) to assess and place students. The

    ACCUPLACER and COMPASS are computer-adaptive standardized tests developed by College

    Board and ACT, respectively. The MDTP, a joint project of the California State University and

    the University of California, is a set of math diagnostics designed to measure student readiness

    for mathematics. During the period of this study, 2005-2008, five of the LACCD colleges used

    the ACCUPLACER, two of the colleges used COMPASS, and two colleges used the MDTP to

    make course placement decisions.

    Using Multiple Measures

    Revisions to the California Code of Regulations in the early 1990s prohibited community

    colleges from using single assessment instruments to place students in remedial courses. The

    intent was to mitigate disproportionate impact on access to college-level courses for

    underrepresented student populations through the use of multiple measures (see Appendix for a

    more in-depth overview of the policy). In addition to standardized test scores, multiple measures

    can include measures of a student’s prior academic achievement and other noncognitive

    attributes, such as educational goals or motivation. The regulations do not formalize a specific

    statewide assessment and placement process, so colleges are afforded autonomy in determining

    which measures to consider, so long as the measures are not discriminatory (e.g., based on race,

    5 There is a “challenge” process in which students can waive pre-requisites if they provide adequate evidence of their math preparation. Our data suggest that less than 5% of enrolled students complete this process.


    ethnicity, or gender). Some manuals provide guidance on how to appropriately select and

    validate measures at the institutional level (CCCCO, 1998; CCCCO, 2011; Lagunoff, Michaels,

    Morris, & Yeagley, 2012), but the devolved autonomy has resulted in considerable variation in

    the multiple measures utilized across the LACCD (Melguizo et al., forthcoming).

    The information can be gathered through writing samples, performance-based

    assessments, surveys and questionnaires, student self-evaluations, counseling interviews during

    the enrollment period, or other processes (CCCCO, 2011; Melguizo et al., forthcoming). Most

    often, information is collected through a survey taken before or after the assessment test and

points are awarded or even deducted for various responses. These points are combined with the

    student’s placement test score and result in a final score used to make a course placement

    recommendation based on each college’s set of cutoff scores. Table 1 shows the multiple

    measures used to supplement student placement test scores in eight of the nine LACCD colleges

    for which multiple measures information was available.

    [Insert Table 1. Multiple measures used for math placement, about here]

    As Table 1 shows, each college has also chosen to utilize a different combination of

    measures. For example, Colleges B and G award a varying amount of points for college plans,

    high school GPA, and previous math courses taken. Furthermore, while most of the schools add

    multiple measure points to the test score, two schools in LACCD subtract points for selected

    responses. College F gives points for what we term college plans (which include the number of

    units a student plans to take and the number of hours they plan to work while taking college

    classes), and the degree to which college or math is important to the student (which we classify

    as motivation), an example of a noncognitive measure. It also deducts points if the student is a

    returning student but has not been enrolled for several years. It is important to note that at no


    time during the assessment process are students made aware of the college’s cut scores or the

    formula used for placement.

    Given these assessment and placement rules in the LACCD, the addition or subtraction of

    multiple measure points can sometimes be the determining factor in course placement. The

    multiple measure points awarded can be enough to place students into a higher-level course or

    place them into a lower-level course. As described earlier, students are considered to have

    received a multiple measure boost if the additional multiple measure points placed the students in

    a math course one level higher than they otherwise would have been by raw test score alone.

    Although there are two colleges that use multiple measure information to subtract points and

    drop students down into a lower-level course, this does not happen frequently enough to warrant

    further investigation.6

    Data

    We obtained the data used for the study through a restricted-use agreement with the

    LACCD. We examined the assessment and enrollment information for all first-time students who

    took a placement test between the 2005/06 and 2007/08 academic years. Transcripts provided

    outcome data through the spring of 2012, which resulted in seven years of outcome data

    available for the 05/06 cohort, six years for the 06/07 cohort, and five years for the 07/08 cohort.

    For the access analysis, we restrict the sample to seven out of nine LACCD colleges: College C

    was not included because it did not have information on multiple measures during the period of

    the study; College G was also not included because it used multiple measures in conjunction

    with multiple test score cutoffs in a way that made it non-comparable with the other colleges.7

6 In College J, only 27 out of 4,303 students earned negative multiple measure points, and of those, only 2 were placed in a lower-level course as a result of point deductions.
7 For analysis of the use of multiple measures and multiple cutoffs in College G, see Author (2014).


    The full sample of assessed students for these seven colleges between 2005 and 2008 includes

    44,228 students.

    The rich assessment data enable us to identify each student whose raw test score was

    below the cutoff score at the institution in which they took the placement test, but whose

    multiple measure points resulted in an adjusted test score that was above the cutoff score.

    Students who met these criteria were coded as having received the multiple measure boost. This

    enabled us to determine the total number of students who received a multiple measure boost in

    each college between 2005 and 2008, as well as examine the number of boosted students by

    college and level of developmental math.
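The coding rule described above can be sketched as follows; the scores, points, and cutoff are illustrative, and the field names are our own:

```python
# Illustrative assessment records; field names are hypothetical.
students = [
    {"raw": 33, "mm": 3},
    {"raw": 36, "mm": 0},
    {"raw": 31, "mm": 2},
]
CUTOFF = 35  # hypothetical cut score for the next-level course

for s in students:
    adjusted = s["raw"] + s["mm"]
    # Boosted: raw score below the cutoff, adjusted score at or above it.
    s["boost"] = s["raw"] < CUTOFF <= adjusted

print([s["boost"] for s in students])
```

Only the first student is coded as boosted: the raw score falls below the cutoff while the adjusted score clears it.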

    Multiple Measures and Access to Higher-Level Courses

    The first set of findings examines the usefulness of multiple measures in increasing

    access to higher-level math courses. Table 2 shows the percentage of students boosted into a

    higher-level course due to the multiple measure reward structure at seven LACCD colleges for

    which multiple measure boosts could be determined. Overall, only 4.23 percent of all students in

this sample were boosted to the next-level course between the 2005/06 and 2007/08 academic years.

    That is, although their raw test score would have placed them in the lower course, the addition of

    multiple measure points caused them to surpass the cutoff score and be placed into a higher-level

    course. Although the percentages vary by college, very few students overall are moved to higher-

    level courses.

    [Insert Table 2. Students receiving multiple measure boosts, about here]

    One explicit goal of the Title 5 revisions was to mitigate disproportionate impact on the

    number of underrepresented minority students being placed into remediation. To examine

disproportionate impact, we calculated math placement rates (i.e., sample means) for each racial subgroup using the adjusted test scores, which include multiple measure points. We then simulated

    counterfactual course placements for each student by using unadjusted test scores without

    multiple measure points. Placement rates are provided for two colleges, A and H, which are also

    the subject of our multivariate analyses described in the next section. These two colleges each

    use one type of additional measure for course placement: prior math course-taking in College A

    and self-reported high school GPA in College H.
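The simulation can be sketched as follows; the cut scores, students, and point values below are fabricated for illustration and do not reflect any college's actual policy:

```python
# Hypothetical cut scores mapping an assessment score to a developmental
# math level (the LACCD sequence: arithmetic up to intermediate algebra).
def place(score):
    if score < 35:
        return "arithmetic"
    if score < 55:
        return "pre-algebra"
    if score < 75:
        return "algebra"
    return "intermediate algebra"

# Fabricated students: (race, raw score, multiple measure points).
students = [
    ("Latino", 34, 2), ("Latino", 54, 2),
    ("African-American", 33, 1), ("White", 60, 0),
]

# Actual placement uses the adjusted score; the counterfactual drops the points.
actual = [(race, place(raw + mm)) for race, raw, mm in students]
counterfactual = [(race, place(raw)) for race, raw, mm in students]

def arithmetic_rate(placements, race):
    rows = [level for r, level in placements if r == race]
    return sum(level == "arithmetic" for level in rows) / len(rows)

print(arithmetic_rate(counterfactual, "Latino"), arithmetic_rate(actual, "Latino"))
```

Comparing the two placement vectors by subgroup gives the change in referral rates attributable to the multiple measure points.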

We present the disproportionate impact results in two ways. First, we examine the overall course placements by race. Then, we show the distribution of students by race within each level

    of developmental math. Comparing the actual placements with the simulated counterfactual

    placements both ways enabled us to determine the extent to which the use of multiple measures

    mitigated disproportionate impact of remediation by racial subgroup. Table 3 shows the results

    of the simulated placements without multiple measure points and actual placement with multiple

    measure points by level of developmental math for Latino and African-American students. Table

    4 shows placement by racial subgroups within pre-algebra.

    [Insert Tables 3 & 4 about here]

    The results indicate that the use of multiple measures as currently operationalized in these

    LACCD colleges only marginally increases the number of underrepresented minority students

    being placed in higher levels of math. For example, in Table 3 we see that under College A's

multiple measure policy, about 3.4 percentage points fewer African-American students and 2.4 percentage points fewer Latino students were referred to arithmetic, the lowest-level course in the developmental math sequence. There was also a 1.5 percentage point increase in the share of Latino students being placed in

    Intermediate Algebra, the highest-level course in the developmental math sequence.


    Although the use of multiple measure points increased access to higher-level courses for

    African-American and Latino students, the results in Table 4 show that the overall racial

    composition of math classes remains largely unchanged, with no statistical difference even at the

    10 percent level. We only present the distribution of students by racial subgroups within pre-

    algebra, but the results are similar for all math levels (these results are available in the

    Appendix). This evidence suggests that despite the current use of multiple measures in the

    LACCD colleges, there continues to be disproportionate impact in assignment to remediation.

    Multiple Measures and Student Success

    While this descriptive analysis offers some insight into the efficacy of multiple measures

    in increasing access to higher-level math courses, one of the goals of the multiple measures

    policy, it is also important for community colleges to design and use assessment and placement

    policies that promote student success. Students should be placed into courses where they are

    likely to succeed given their level of college readiness and math skills. To estimate the

    association between multiple measures and student success outcomes, we used linear probability

    regression models to compare the outcomes of students who were boosted into a higher-level

    course due to added multiple measure points with students whose higher test scores placed them

    directly into the same course. The short-range outcome of interest is a dichotomous variable

    indicating whether or not the student passed the first enrolled math course with a C or better (the

    one in which the student was placed). Scott-Clayton et al. (2012) noted the potential controversy

of using a C grade as an outcome, since developmental educators and policy-makers may view a C as a mediocre achievement. However, since students in the LACCD who earn a C are considered to have completed the prerequisite and can move on to the next course, we

    believe that earning a C is an appropriate short-term outcome for examining placement accuracy


    in this context. The transcript data also allow us to examine two important longer-term outcomes

for community college students—the total number of degree-applicable units completed and the total

    number of transfer-level units completed. Degree-applicable units are those which can be applied

    towards an associate’s degree, and transfer-level units are those which would be accepted at a

    California four-year university.

    The linear probability regression model is:

y_i = α + β_1*BOOST_i + β_2*MMPOINTS_i + β_3*TEST_i + γX'_i + ε_i

    where yi is the outcome of interest. The treatment variable of interest is BOOSTi, a dichotomous

    variable indicating whether or not the student received multiple measure points that resulted in a

    boost to the next highest level math course. MMPOINTSi is the number of multiple measure

    points a student received, and TESTi is the student’s normalized test score by test type, which

    allows for comparison across math levels within each college. The normalized score for each

    student provides an indicator of the student’s ability relative to other students who took the same

    test. We also include dummy variables indicating placement subtest and math placement level,

    which serve as a control for any variation that may be related to the different placement tests

    used for each level or in each college.8 Finally, X’i is a vector of student information including

    age, race, sex, language spoken at home, and assessment cohort. Including these background

    variables enables us to obtain a more precise estimate of the relationship between the multiple

    measure boost and the outcomes of interest.
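A minimal version of this estimation can be sketched in pure Python on simulated data; all values are fabricated stand-ins, and a fuller specification would add the subtest and level dummies and the background controls described above:

```python
import random

random.seed(0)

# Simulated stand-in data (not LACCD records): normalized test score,
# multiple measure points, a boost indicator, and a pass/fail outcome.
n = 400
test = [random.gauss(0, 1) for _ in range(n)]
mm_points = [random.randint(0, 4) for _ in range(n)]
boost = [random.randint(0, 1) for _ in range(n)]
passed = [1 if 0.2 * t + random.gauss(0, 1) > 0 else 0 for t in test]

# Fit y = a + b1*BOOST + b2*MMPOINTS + b3*TEST by ordinary least squares,
# solving the normal equations (X'X)b = X'y with Gaussian elimination.
X = [[1.0, b, m, t] for b, m, t in zip(boost, mm_points, test)]
k = len(X[0])
XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
Xty = [sum(row[i] * y for row, y in zip(X, passed)) for i in range(k)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    aug = [row[:] + [v] for row, v in zip(A, b)]
    m = len(aug)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, m):
            f = aug[r][col] / aug[col][col]
            for c in range(col, m + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (aug[r][m] - sum(aug[r][c] * x[c] for c in range(r + 1, m))) / aug[r][r]
    return x

coef = solve(XtX, Xty)  # [intercept, boost, mm_points, test]
print([round(c, 2) for c in coef])
```

Because the outcome is binary, the coefficients are interpreted as changes in the probability of passing, which is the appeal of the linear probability model here.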

    Two Focus Colleges

    We focus on the effect of the multiple measure boost in two LACCD colleges: College A,

    which awards multiple measure points based solely on a student’s prior math background, and

    8 The ACCUPLACER, for example, has different subtests such as Arithmetic or Elementary Algebra. Colleges use different subtest scores to make placement decisions.


    College H, which awards multiple measure points based solely on a student’s self-reported high

    school GPA. Since the multiple measure boost is determined by a single measure in addition to

    the placement test score, we can determine the effectiveness of that specific measure in

    increasing placement accuracy. College A awards one point for each of the following prior math

    background measures: the highest level of math previously taken with a grade of C or better (+1

    for trigonometry or higher), the number of years of math taken in high school (+1 for three years

    or more), the length of time since math was last taken (+1 if less than one year), and whether or

    not the student has taken algebra (+1). Students who take the placement test (ACCUPLACER) at

    College A can score a maximum of 120 points and earn a maximum of four multiple measure

    points. College H awards two additional points for a high school GPA in the B to B- range, and

    four additional points for a high school GPA in the A to A- range. Students who take the

placement test at College H can score between 40 and 50 points depending on the last subtest

    that they take (MDTP). We will also discuss results from pooled analyses with two additional

    colleges (D and E), but choose to highlight College A and H because they offer the largest ranges

    of additional multiple measure points and the possibility of examining long-term outcomes.
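The two point rubrics can be sketched as simple functions; the function names, argument names, and GPA band labels are our own, not the colleges':

```python
# Sketch of College A's rubric as described in the text: one point for each
# of four prior-math criteria, added to the ACCUPLACER score.
def college_a_points(highest_math, years_of_hs_math, years_since_math, took_algebra):
    points = 0
    if highest_math == "trigonometry or higher":  # passed with a C or better
        points += 1
    if years_of_hs_math >= 3:
        points += 1
    if years_since_math < 1:
        points += 1
    if took_algebra:
        points += 1
    return points  # at most 4 points by construction

# Sketch of College H's rubric: points by self-reported high school GPA band.
def college_h_points(gpa_band):
    return {"A to A-": 4, "B to B-": 2}.get(gpa_band, 0)

print(college_a_points("trigonometry or higher", 4, 0.5, True),
      college_h_points("A to A-"))
```

Both rubrics top out at four points, which is what makes the maximum possible boost comparable across the two focus colleges.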

    Comparison Groups

    We run two linear probability regression models for each of the colleges. First, we

    compare boosted students to other students whose test scores are in a narrow bandwidth around

    their own. In the second model, we include all students within a given course level. To illustrate

    this, consider College A, for which the cut score for placement in pre-algebra is 35 on the

    ACCUPLACER Arithmetic subtest (AR). Students who attain a score of 35 and above are placed

    in pre-algebra (three levels below transfer) while students scoring below 35 are placed in

    arithmetic (four levels below transfer). The multiple measure boost could have pushed a student


    from arithmetic to pre-algebra if the addition of multiple measure points pushed the adjusted

    ACCUPLACER score (raw score + multiple measure points) to 35 or above. For these boosted

    students, the range of raw AR scores is 31≤ARr≤34.9 with a maximum of four multiple measure

    points. Their resulting adjusted AR score is 35≤ARa≤38.9. In the first regression model

    (Around), we compare the boosted students with 35≤ARa≤38.9 to the non-boosted students

whose raw AR test scores were in the range 35≤ARr≤38.9. In the second regression model

    (Entire), we compare the boosted students to the entire range of students in the same course

level. In College A, students can get placed into pre-algebra with an adjusted score of 35≤ARa below the cut score for the next course level.

To check whether receiving a boost influenced enrollment, we compared the proportion of boosted students among non-enrolled and enrolled students. In both colleges, there is no statistical difference in either of the

    groups. This is the desired result since bias would be introduced if students enrolled or attempted

    the placed math course based on receiving a multiple measure boost. There are several

    significant differences in terms of demographic characteristics between the students who enroll

    in their placed math course versus those who do not, as well as between those who are boosted

    versus those who are not. However, there appears to be no relationship between receiving a

    multiple measure boost and enrolling in a math course.
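This balance check can be sketched as a two-proportion z-test; the counts below are hypothetical and chosen only to illustrate the desired null result:

```python
from math import sqrt, erf

# Hypothetical counts: boosted students among enrolled and non-enrolled groups.
x_enr, n_enr = 40, 1000      # enrolled in the placed course
x_non, n_non = 22, 550       # did not enroll

p1, p2 = x_enr / n_enr, x_non / n_non
p_pool = (x_enr + x_non) / (n_enr + n_non)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_enr + 1 / n_non))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value

# A large p-value is the desired result here: enrollment should not respond
# to receiving a boost, or enrolled-only estimates would be selection-biased.
print(round(z, 3), round(p_value, 3))
```

With equal boost rates in the two groups, the statistic is zero and the test fails to reject, mirroring the null finding reported above.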

    [Insert Table 5, Boosted and enrolled students, about here]

    We therefore proceed with enrolled students only, providing an estimate of the treatment

    on the treated effect. The sample includes 5,279 out of 8,323 students in College A, or about 63

    percent of the original sample, and 7,575 out of 10,349 students, about 73 percent of the original

    sample, in College H. These students enrolled within a year of assessment and attempted the

math course in which they were placed. The one-year window is intended to allow students sufficient time to enroll while limiting changes in their mathematical knowledge since the assessment. Students who did not comply with the course

    placement were also excluded since the goal of the analysis is to determine placement accuracy.

    Overall, fewer than 5 percent of students enrolled in a course other than the one in which they

were placed.9 Rather than assigning non-enrollees a value of zero on the course-passing outcome and estimating an intent-to-treat effect, we focus on enrolled students, reasoning that students' unawareness of the placement process, and of whether a boost was received, helps limit selection bias. Students are not informed of the placement criteria or placement rules of the

    college. After the placement test, the student simply receives a summary of their score and a

    9 This figure includes students who took higher-level courses, which is possible if students challenge their placement and receive permission to enroll in a higher-level course, and students who chose to take lower-level courses.


    course recommendation. Importantly, the student does not know if placement into a particular

    level was the result of a multiple measure point boost, thus limiting the possibility that students

    would have exploited the system to attain additional points. This is in line with qualitative

    research highlighting the fact that community college students generally feel uninformed and

    unaware of community college assessment and placement policies (Venezia, Bracco, & Nodine,

    2010).

    Main Findings

A summary of our linear probability regression estimates is presented in Tables 6 and 7.

    In these summarized tables, we show differences in outcomes between boosted students and their

    higher-scoring peers.10

    Two sets of regression results are provided for each outcome. In the

    “Around” columns, we restricted the analytical sample to those students within a narrow

    bandwidth of test scores around the boosted students who took the same subtest.11

    In the “Entire”

    columns, we included all students within the course level who took the same subtest. Overall, we

    observe strong positive relationships between test scores and student outcomes, and between the

    number of multiple measure points and student outcomes, suggesting that both are strongly

    predictive of student achievement. We hereafter focus our results on the multiple measure boost.

    Prior math. Column 1 of Table 6 shows that, all else constant, lower-scoring students in

    College A who received a multiple measure boost (based on prior math) that placed them in a

    higher-level course performed no differently from their similar-scoring peers in terms of passing

    the first math course they enrolled in. However, when comparing boosted students to all other

    students in the same math level, we do observe a statistically significant decrease in the

probability of passing the first math class, by about 8 percentage points. This is to be expected since the highest-scoring students in the same math level can have test scores that

    are 30 points higher than the lowest-scoring students in the level. In terms of long-term

    outcomes, columns 3-6 indicate that there are no statistically significant differences in the total

    number of degree-applicable and transfer-level credits that boosted students completed through

    spring 2012. In other words, students who received a multiple measure boost had the same

    degree and transfer progress as their higher-scoring peers.

    [Insert Table 6. Regression results, College A, about here]

    We also pooled students from Colleges A and D together, since both utilized measures of

    prior math to determine the multiple measure boost.12

    Results from these pooled estimates are

    similar to those from College A alone – students who received a multiple measure boost were no

    less likely to pass the math course in which they were placed. We do not show the pooled results

    here because estimates of long-term outcomes may be more attributable to differences between

    colleges and placement policies than to the multiple measure boost.13

    These results are available

    in the Appendix.

    High school GPA. Students in College H are awarded a maximum of four multiple

    measure points based on their self-reported high school GPA. The results in Table 7 indicate

    that, with respect to the linear probability of passing the first enrolled math course and eventual

    credit accumulation, most of the differences between boosted students and their peers were not

    statistically significant. In Column 1, for example, we observe that boosted students performed

    no differently from peers with test scores around the placement cutoff. Surprisingly, when

    comparing boosted students to all other students in the same math level, students who were

    placed into higher level courses as a result of a multiple measure boost actually had a 6.16

12 We thank the anonymous reviewers for this suggestion. The full pooled results for Colleges A and D are available in the Appendix.
13 For example, both Colleges D and E assign students to what we call “extended” algebra courses, which extend the developmental math sequence by an additional semester.


percentage point higher likelihood of passing the course than their higher-scoring peers.

One concern with these estimates is that instructors could adjust their instruction or grading in order to meet the needs of students, which would bias the estimate of the relationship between multiple measure boosts and student outcomes.

    To address these concerns, we make the assumption that instructors do not adjust their

practices much in response to student academic preparation, an assumption that is reasonable given that a

    relatively small percentage of students in any given course would have received the boost (about

    4.2 percent of students on average for seven of nine LACCD colleges). Furthermore, the non-

    significant estimates we obtained in this analysis are consistent across nearly all levels of

    developmental math, as well as in both colleges, which suggests that it is unlikely that cut scores

    are incorrect across the board or that instructor practices are systematically forgiving to

    students.14

    Finally, our analytic approach consisted of two models – one in which we compared

    boosted students to students with similar placement test scores, and another in which we

    compared boosted students to all other students in their level within a given institution. These

    results demonstrated that boosted students were not only as successful as the students within a

    similar score range, but in most cases also as successful as students within the entire level. Given

    that students at a given level of developmental math can have differences as large as 30 points on

their placement test scores, we remain confident that our estimates of the effect of receiving a

    multiple measure boost are consistent and reliable.

    Discussion

    Increasing Placement Accuracy

    This evidence from the LACCD is timely given the changing landscape of placement

    testing for developmental education. As mentioned in the introduction, several states, such as

Connecticut, Florida, North Carolina, and Texas, have already revised or are in the process of revising

    developmental education assessment and placement policies to incorporate multiple measures.

14 Analyses by level of developmental math are available from the authors upon request.


    However, aside from predictive validity estimates, there are few sources of evidence from actual

    placement policies. The findings of this study are important because there is limited

    understanding of what measures can be validly used to make course placement decisions. In

    addition, qualitative research with community college faculty and staff has shown that

    practitioners do not feel supported in measure selection and validation, and that they sometimes

    perceive measures to be insignificant (Melguizo et al., forthcoming). Even though coordinating

    entities such as the CCCCO provide guides for multiple measure use, there is limited evidence

    and validation of measures that can be used to inform placement decisions.

    Our analysis of LACCD data provides validation for two specific measures – prior math

    background and high school GPA. Even though these measures are known to be predictive of

    college outcomes, current conceptions of validation highlight the need to examine actual

    outcomes in contexts where measures are used to make placement decisions (Kane, 2006). The

    results suggest that community colleges can increase placement accuracy by using multiple

    measure information in conjunction with placement test scores. Evidence from the LACCD

    colleges demonstrates that those students who were placed into higher-level developmental math

    courses using multiple measures performed no differently from their higher-scoring peers. Since

    these students were given the opportunity to take a higher-level course and performed at least as

    well as their higher-scoring peers, these students were more accurately placed than they would

    have been by placement test scores alone.

    One implication of the findings for the colleges which used measures of prior math is that

    these measures can supplement test scores but not necessarily replace them. We observed

    improvements in placement accuracy for students who scored around the placement cutoff, but

    these students did not match or outperform their higher-scoring peers. This suggests that


    although measures of prior math can increase placement accuracy, they should probably be used

    in conjunction with placement test scores.

    Like other studies, we also found that high school GPA is highly predictive of college

    persistence and success. However, the finding that students who received a multiple measure

    boost based on GPA outperformed the entire range of students in the same level suggests that

    GPA may be a very useful measure for making placement decisions, and underscores the role of

    effort and self-control in college achievement (Duckworth et al., 2012). Further research should

    examine the extent to which making placement decisions based solely on GPA, either self-

    reported or obtained directly from high school transcripts, can lead to even greater improvements

    in placement accuracy. Of course, colleges would need to take caution so as not to induce

    students to game the system by reporting higher or lower GPAs.

    Overall, the findings indicate that these two measures can be systematically used to

    improve course placement decisions. Using them in conjunction with test scores can increase

    placement accuracy and may be, as Belfield and Crosta (2012) suggest, closer to the optimal

    decision rule for placement in developmental math.

    Promoting Access and Success

    This examination of community college assessment and placement policies also

    highlights the underlying tension between the goals of access and success when making

    placement decisions. Indeed, promoting progression versus maintaining standards is one of the

    “opposing forces” that community colleges often operate under (Jaggars & Hodara, 2013; Perin,

    2006). Community colleges have the responsibility to place students in courses in which they are

    most likely to succeed given their math skills while simultaneously promoting progression

    towards completion and attainment.


    The results of the study demonstrate that multiple measures can be utilized to achieve

    both of these goals. Based on this evidence from the LACCD colleges, students who received a

    multiple measure boost based on prior math and high school GPA took higher-level courses and

    succeeded in them at rates no different from their higher-scoring peers. Using these additional

    student background measures in conjunction with test scores to make course placement decisions

    may therefore achieve the goals of increasing access and ensuring student success.

Nonetheless, while boosted students are just as likely to be successful as their peers, our analyses also show that the goal of mitigating disproportionate impact in remediation is not being fully realized. The California state policy explicitly states that assessment practices should

    not result in disproportionate impact on any underrepresented minority group. Our simulated

    placements with and without multiple measures show that the use of these particular multiple

measures only marginally increased access to higher-level math courses for African-American

    and Latino students. Community colleges should therefore continue to explore other ways to

    improve assessment and placement such that disproportionate placement in remediation is

    mitigated while the likelihood of student success is maximized.

    Some research shows that noncognitive measures may be useful for identifying college

    readiness and promoting access and success in college, particularly for underrepresented

    minority students (Sedlacek, 2004). The validity of these measures has yet to be explored in the

    context of developmental education. Even though some of the LACCD colleges' placement rules

    include noncognitive measures such as college plans, educational goals, availability of social

    supports, and motivation, these are often used in conjunction with other measures. We were thus

unable to isolate the effect of using these measures alone to make placement decisions. In addition,

    colleges that used these measures also weighted multiple measures in such a way that, relative to


    Colleges A and H, very few students received a multiple measure boost. Further research should

    focus on validating other cognitive and noncognitive measures that can be useful for identifying

    incoming student readiness, specifically those that increase access to higher-level courses for

    underrepresented student populations.

    Conclusion

    The multiple measures policy in California provides an opportunity to validate measures

    in terms of their usefulness for course placement. The results of this study indicate that students

    who were placed into higher-level courses using information from multiple measures, in this case

    high school GPA and prior math course-taking, performed no differently from their peers who

    earned higher test scores. This suggests that community colleges can systematically improve

    placement accuracy by using student background information in addition to assessment data to

    make initial course placement decisions. Such policies would increase access to higher-level

    math without decreasing students’ chances of success in the first math course in which they

    enroll or eventual credit accumulation. We do recognize that these data may be unavailable for

    non-traditional students or international students whom community colleges serve in substantial

    numbers. Further research should be done to identify and validate a broader range of measures

    that can improve placement accuracy for all types of students. Still, using multiple sources of

    information about incoming community college students not only increases access to higher-

    level math courses but can also ensure that students are placed at a level where they are likely to

    be successful. This can ultimately promote equity and efficiency in the assessment and

    placement process, accelerate college completion, and reduce the financial and academic burdens

    of postsecondary remediation.


    References

    ACT, Inc. (2012). ENGAGE College user’s guide. Iowa City, IA: ACT, Inc.

    Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through

    college. Washington, D.C.: Office of Vocational and Adult Education, U.S. Department

    of Education.

    Allen, J., Robbins, S. B., Casillas, A., & Oh, I. S. (2008). Third-year college retention and

    transfer: Effects of academic performance, motivation, and social connectedness.

    Research in Higher Education, 49(7), 647-664.

    American Educational Research Association, American Psychological Association, & National

    Council on Measurement in Education. (1999). Standards for educational and

    psychological testing. Washington, DC: AERA, APA, & NCME.

    Armstrong, W. B. (2000). The association among student success in courses, placement test

    scores, student background data, and instructor grading practices. Community College

    Journal of Research and Practice, 24(8), 681-695.

    Author. (2014).

    Bailey, T. (2009). Challenge and opportunity: Rethinking the role and function of developmental

    education in community college. New Directions for Community Colleges, 145, 11-30.

    Bailey, T., Jeong, D. W., & Cho, S. W. (2010). Referral, enrollment, and completion in

    developmental education sequences in community colleges. Economics of Education

    Review, 29(2), 255-270.

    Belfield, C. R., & Crosta, P. M. (2012). Predicting success in college: The importance of

    placement tests and high school transcripts (CCRC Working Paper No. 42). New York,

    NY: Community College Research Center.


    Boylan, H. (2009). Targeted intervention for developmental education students (TIDES). Journal

    of Developmental Education, 32(3), 14-23.

    Brewer, D., & Smith, J. (2008). A framework for understanding educational governance: The

    case of California. Education Finance and Policy, 3(1), 20-40.

    Burdman, P. (2012). Where to begin? The evolving role of placement exams for students starting

    college. Boston, MA: Jobs for the Future.

    Cage, M. C. (1991, June 21). Cal. Community Colleges System Agrees to Change Role of

Testing. The Chronicle of Higher Education, p. A21.

    California Code of Regulations, Title 5, §§ 58106, 55002, 55201, 55510-55532

    California Community Colleges Chancellor's Office. (1998). Multiple measures and other

    sorrows: A guide for using student assessment information with or instead of test scores.

Sacramento, CA: Author. Retrieved from http://extranet.cccco.edu/Portals/1/SSSP/Matriculation/Assessment/MultipleMeasuresAndOtherSorrowsMarch1998.pdf

    California Community Colleges Chancellor's Office. (2011). California community colleges

    matriculation program handbook. Sacramento, CA: Author. Retrieved from

http://extranet.cccco.edu/Portals/1/SSSP/Matriculation/MatriculationHandbookRevSeptember2011.pdf

    Collins, M. L. (2008). It's not just about the cut score: Redesigning placement assessment

    policies to improve student success. Boston, MA: Jobs for the Future.

    Conley, D. T. (2007). Toward a more comprehensive conception of college readiness. Eugene,

    OR: Educational Policy Improvement Center.

    DesJardins, S. L., & Lindsay, N. K. (2007). Adding a statistical wrench to the "toolbox".

    Research in Higher Education, 49(2), 172-179.


    Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and

    passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–

    1101.

    Duckworth, A. L., Quinn, P. D., & Tsukayama, E. (2012). What No Child Left Behind leaves

    behind: The roles of IQ and self-control in predicting standardized achievement test

    scores and report card grades. Journal of Educational Psychology, 104(2), 439-451.

    Educational Testing Service (ETS). (2013). ETS Personal Potential Index: Evaluator user’s

    guide. Princeton, NJ: Author.

Fain, P. (2013, February 19). Redefining college-ready. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2013/02/19/two-community-colleges-get-serious-about-working-k12

    Florida Senate Bill No. 1720. Retrieved from http://laws.flrules.org/files/Ch_2013-051.pdf

    Fong, K., Melguizo, T., Bos, H., & Prather, G. (2013). A different view on how we understand

    progression through the developmental math trajectory. Los Angeles, CA: Rossier

    School of Education, University of Southern California. Retrieved from

    http://www.uscrossier.org/pullias/research/projects/sc-community-college/

    Fulton, M. (2012). Using state policies to ensure effective assessment and placement in remedial

    education. Denver, CO: Education Commission of the States.

    Geiser, S., & Santelices, M. (2007). Validity of high-school grades in predicting student success

    beyond the freshman year: High-school record vs. standardized tests as indicators of

    four-year college outcomes. Berkeley, CA: Center for Studies in Higher Education.

    Geiser, S., & Studley, R. (2003). UC and the SAT: Predictive validity and differential impact of

    the SAT I and SAT II at the University of California. Educational Assessment, 8(1), 1-26.


    Gerlaugh, K., Thompson, L., Boylan, H., & Davis, H. (2007). National study of developmental

    education II: Baseline data for community colleges. Research in Developmental

    Education, 20(4), 1-4.

    Heckman, J., Stixrud, J., & Urzua, S. (2006). The effects of cognitive and noncognitive abilities

    on labor market outcomes and social behavior. Journal of Labor Economics, 24(3), 411–

    482.

    Hodara, M., Jaggars, S. S., & Karp, M. M. (2012). Improving developmental education

    assessment and placement: Lessons from community colleges across the country (CCRC

    Working Paper No. 51). New York, NY: Community College Research Center.

    Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community

    colleges. Community College Review, 39(4), 327-351.

Jaggars, S. S., & Hodara, M. (2013). The opposing forces that shape developmental education.

    Community College Journal of Research and Practice, 37(7), 575-579.

    Jaffe, L. (2012, October). Mathematics from high school to community college: Preparation,

    articulation, and college un-readiness. Paper presented at the 2012 Annual Conference of

    the Research and Planning Group for California Community Colleges.

    Jenkins, D., Jaggars, S. S., & Roksa, J. (2009). Promoting gatekeeper course success among

    community college students needing remediation: Findings and recommendations from a

    Virginia Study. New York, NY: Community College Research Center. Retrieved from:

    http://eric.ed.gov/?id=ED507824

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17–64). Westport, CT: ACE/Praeger Publishers.


    Karp, M. M., & Bork, R. H. (2012). "They never told me what to expect, so I didn't know what to

    do": Defining and clarifying the role of a community college student (CCRC Working

Paper No. 47). New York, NY: Community College Research Center.

    Lagunoff, R., Michaels, H., Morris, P., & Yeagley, P. (2012). A framework for evaluating the

    technical quality of multiple measures used in California community college placement.

San Francisco, CA: WestEd.

    Lewallen, W. C. (1994). Multiple measures in placement recommendations: An examination of

    variables related to course success. Lancaster, CA: Antelope Valley College. (ERIC

    Document No. 381 186).

    Long Beach College Promise. (2013). 5-Year Progress Report (2008-2013): A breakthrough in

student achievement. Long Beach, CA: Author. Retrieved from http://www.longbeachcollegepromise.org/wp-content/uploads/2013/03/LBCP-5-Year-ProgressReport.pdf

    Mattern, K. D., & Packman, S. (2009). Predictive validity of ACCUPLACER scores for course

    placement: A meta-analysis (Research Report No. 2009-2). New York, NY: College

    Board.

    Maruyama, G. (2012). Assessing college readiness: Should we be satisfied with ACT or other

    threshold scores? Educational Researcher, 41(7), 252-261.

    Marwick, J. D. (2004). Charting a path to success: The association between institutional

    placement policies and the academic success of Latino students. Community College

    Journal of Research and Practice, 28(3), 263-280.

    Medhanie, A. G., Dupuis, D. N., LeBeau, B., Harwell, M. R., & Post, T. R. (2012). The role of

    the ACCUPLACER mathematics placement test on a student’s first college mathematics

    course. Educational and Psychological Measurement, 72(2), 332-351.


    Melguizo, T., Bos, H., & Prather, G. (2013). Using a regression discontinuity design to estimate

    the impact of placement decisions in developmental math in Los Angeles Community

    College District (LACCD). Los Angeles, CA: University of Southern California.

Retrieved from http://www.uscrossier.org/pullias/research/projects/sc-community-college/

    Melguizo, T., Hagedorn, L. S., & Cypers, S. (2008). Remedial/developmental education and the

    cost of community college transfer: A Los Angeles County sample. The Review of Higher

    Education, 31(4), 401-431.

    Melguizo, T., Kosiewicz, H., Prather, G., & Bos, H. (in press). How are community college

    students assessed and placed in developmental math? Grounding our understanding in

    reality. Journal of Higher Education.

Merisotis, J. P., & Phipps, R. A. (2000). Remedial education in colleges and universities: What's really going on? The Review of Higher Education, 24(1), 67-85.

    National Center for Public Policy and Higher Education & Southern Regional Education Board

    (NCPPHE & SREB). (2010). Beyond the rhetoric: Improving college readiness through

    coherent state policy. Atlanta, GA: NCPPHE. Retrieved from

    http://publications.sreb.org/2010/Beyond%20the%20Rhetoric.pdf

    Noble, J. P., & Sawyer, R. L. (2004). Is high school GPA better than admission test scores for

    predicting academic success in college? College and University Journal, 79(4), 17–22.

Perin, D. (2006). Can community colleges protect both access and standards? The problem of remediation. Teachers College Record, 108(3), 339-373.

    Perry, M., Bahr, P. M., Rosin, M., & Woodward, K. M. (2010). Course-taking patterns, policies,

    and practices in developmental education in the California Community Colleges.


    Mountain View, CA: EdSource. Retrieved from

    http://www.edsource.org/assets/files/ccstudy/FULL-CC-DevelopmentalCoursetaking.pdf

    Porchea, S. F., Allen, J., Robbins, S., & Phelps, R. P. (2010). Predictors of long-term enrollment

    and degree outcomes for community college students: Integrating academic,

    psychosocial, sociodemographic, and situational factors. The Journal of Higher

    Education, 81(6), 750-778.

    Porter, A., & Polikoff, M. (2012). Measuring academic readiness for college. Educational

    Policy, 26(3), 394-417.

    Robbins, S. B., Allen, J., Casillas, A., Peterson, C. H., & Le, H. (2006). Unraveling the

    differential effects of motivational and skills, social, and self-management measures from

    traditional predictors of college outcomes. Journal of Educational Psychology, 98(3),

    598-616.

    Sawyer, R. (1996). Decision theory models for validating course placement tests. Journal of

    Educational Measurement, 33(3), 271-290.

    Sawyer, R. (2007). Indicators of usefulness of test scores. Applied Measurement in

    Education, 20(3), 255-271.

Sawyer, R. (2013). Beyond correlations: Usefulness of high school GPA and test scores in making college admissions decisions. Applied Measurement in Education, 26(2), 89-112.

    Scott-Clayton, J. (2012). Do high-stakes placement exams predict college success? (CCRC

    Working Paper No. 41). New York, NY: Community College Research Center.


    Scott-Clayton, J., Crosta, P., & Belfield, C. (2012). Improving the targeting of treatment:

    Evidence from college remediation (NBER Working Paper 18457). Cambridge, MA:

    National Bureau of Economic Research.

Sedlacek, W. E. (2004). Beyond the big test: Noncognitive assessment in higher education. San Francisco, CA: Jossey-Bass.

    Seymour-Campbell Matriculation Act (1986), California Education Code §§ 78210-78218

    Sommerville, J., & Yi, Y. (2002). Aligning K-12 and postsecondary expectations: State policy in

    transition. Washington, D.C.: National Association of System Heads.

Stone, D. (2002). Policy paradox: The art of political decision making. New York, NY: W.W. Norton.

    Texas Higher Education Coordinating Board. (2012). 2012-2017 Statewide Developmental

Education Plan. Retrieved from http://www.thecb.state.tx.us/index.cfm?objectid=233A17D9-F3D3-BFAD-D5A76CDD8AADD1E3

    van der Linden, W. J. (1998). A decision theory model for course placement. Journal of

    Educational and Behavioral Statistics, 23(1), 18-34.

    Venezia, A., Bracco, K., & Nodine, T. (2010). One-shot deal? Students' perceptions of

    assessment and course placement in California's community colleges. San Francisco, CA:

    WestEd.

    Wiseley, W. C. (2006). Regulation, interpretation, assessment and open access in California

community colleges. Retrieved from http://home.comcast.net/~chuckwiseley/Publications/Regulation,%20Interpretation,%20and%20Open%20Access%20(Wiseley,%202006).pdf


Table 1. Multiple measures used for math placement

College   Point Range   Measures (see note)
A         0 to 4        +
B         0 to 3        +, +, +
C         N/A
D         0 to 2        +
E         0 to 3        +
F         -2 to 2       +/-, +/-
G         0 to 3        +, +, +
H         0 to 4        +
J         -2 to 5       +, +/-, +/-

Note: (+) indicates measures for which points are added, and (-) indicates measures for which points are subtracted. Measures fall into three categories: Academic Background (whether the student received a diploma or GED, high school GPA, and prior math course-taking, including achievement and highest level completed), College Plans (hours planned to attend class, hours of planned employment, and time out of formal education), and Motivation (importance of college and importance of mathematics). Multiple measure information was not available for one of the nine LACCD colleges. Source: LACCD, 2005-2008.


    Table 2. Students receiving multiple measure boost into higher-level courses (%)

    College AR to PA PA to EA EA to IA IA to CLM Total % Observations

    A 5.60 7.86 9.15 13.20 6.13 8323

    B 0.00 3.95 2.93 3.26 0.87 4470

    C - - - - - -

    D 3.50 6.29 5.49 4.23 3.80 9316

    E - 1.73 5.33 0.00 1.62 5189

    F 4.24 0.70 0.14 0.00 0.83 2278

    G - - - - - -

    H 1.97 1.71 2.43 6.09 2.66 10349

    J 27.82 20.10 15.44 20.00 13.76 4303

    Total % 5.38 5.26 4.95 5.84 4.23

    Observations 9236 13294 9593 3460 44228

    Note: Arithmetic (AR); Pre-Algebra (PA); Elementary Algebra (EA); Intermediate Algebra (IA); College-Level Math (CLM). Total percentage and total number of observations includes students who were assigned to AR (N=8645). Multiple measure information was not available for College C. College G

    uses a set of diagnostic tests and multiple cutoffs to assign students to developmental math courses, so the use of multiple measures operates differently

    from the other colleges. For further details about College G see Author (2014).


Table 3. Placement by math level with and without multiple measures

(a) African-American

                            College A                          College H
                     w/o MM   w/ MM    Difference      w/o MM   w/ MM    Difference
Arithmetic           0.337    0.303     0.0339*        0.0568   0.0502    0.00655
Pre-algebra          0.403    0.426    -0.0235         0.225    0.225     0
Elem. Algebra        0.151    0.150     0.00188        0.410    0.415    -0.004
Int. Algebra         0.0988   0.107    -0.00847        0.253    0.255    -0.00218
College-Level Math   0.0103   0.0141   -0.00376        0.0546   0.0546    0
Observations         1063     1063      2126           458      458       916
Pearson (p-value)                       0.461                             0.995

(b) Latino/a

                            College A                          College H
                     w/o MM   w/ MM    Difference      w/o MM   w/ MM    Difference
Arithmetic           0.213    0.189     0.0241*        0.0237   0.0198    0.00395
Pre-algebra          0.449    0.451    -0.00192        0.176    0.174     0.00216
Elem. Algebra        0.227    0.231    -0.00466        0.412    0.412     0.000359
Int. Algebra         0.101    0.116    -0.0151**       0.316    0.315     0.000359
College-Level Math   0.0101   0.0126   -0.00247        0.0722   0.0790   -0.00682
Observations         3649     3649      7298           2784     2784      5568
Pearson (p-value)                       0.0363                            0.755

Note: Mean coefficients; Multiple Measures (MM). Source: LACCD, 2005-2008. * p


Table 4. Placement in pre-algebra with and without multiple measures

                            College A                          College H
                     w/o MM   w/ MM    Difference      w/o MM   w/ MM    Difference
Asian                0.114    0.113     0.00166        0.139    0.143    -0.00411
African-American     0.135    0.141    -0.00618        0.0899   0.092    -0.00209
Hispanic             0.516    0.512     0.00391        0.428    0.432    -0.00457
White                0.164    0.163     0.00101        0.235    0.224     0.0106
Other                0.0714   0.0718   -0.0004         0.109    0.109     0.000146
Observations         3179     3217      6396           1146     1120      2266
Pearson (p-value)                       0.969                             0.982

Note: Mean coefficients; Multiple Measures (MM). Source: LACCD, 2005-2008. * p


Table 5. Boosted and enrolled students

(a) College A

                                   Non-boosted vs. Boosted              Non-enrolled vs. Enrolled
                              Non-boosted  Boosted   Difference     Non-enrolled  Enrolled   Difference
Attempted placed math course  0.636        0.61       0.0261
Received MM boost                                                   0.0654        0.0589      0.00646
Male                          0.473        0.478     -0.00576       0.502         0.457       0.0451***
Asian                         0.200        0.202     -0.00153       0.237         0.179       0.0578***
African-American              0.128        0.125      0.00237       0.146         0.117       0.0281***
Latino/a                      0.436        0.469     -0.0322        0.369         0.478      -0.109***
White                         0.164        0.147      0.0172        0.172         0.158       0.0142
Other                         0.0710       0.0569     0.0142        0.0759        0.0669      0.00902