A Q&A with CAPR Keynote Speaker Thomas Bailey
You can read a Q&A with Harvard Graduate School of Education Dean Bridget Terry Long, who will also give a keynote address at the CAPR conference, here.
Before he was president of Teachers College, Thomas Bailey was the director of the Community College Research Center (CCRC) and the leader of CAPR through its first several years. During his tenure of over two decades at CCRC, he witnessed—and contributed to—a dramatic evolution of our understanding of the limitations and possibilities of developmental education. In a keynote address at the CAPR conference next week, Bailey will discuss how the evidence base on developmental education has grown over time and what we know about it now. We spoke to him recently in his office at Teachers College. The following is an edited transcript of our conversation.
Can you give me a preview of your CAPR keynote?
I’ll start with an overview of the discussions that took place around developmental education reform in the 1990s and how they evolved through the 2000s and 2010s, highlighting the role research played in that evolution. In the 1990s, there were conversations within community colleges about practices that could better support students with weak academic skills. While I think we would still agree with most of those practices, there’s now a far greater sense of urgency for reform and a conviction that we need to rethink our whole approach to working with community college students. Twenty-five years ago, we had many ideas for strengthening developmental education; today, there is a movement to rethink the entire system.
Research has played a fundamental role in the emergence of this reform movement. In the 1990s, we didn’t have consistent data on college completion. Longitudinal data that followed individual students was rare, and rigorous causal analysis was relatively uncommon. As those things changed, we were able to see how few community college students earned credentials. We learned that a majority of students assigned to developmental education never completed their sequences or took the relevant college-level course. We also learned that students assigned to developmental education did not have better outcomes than similar students who were not assigned.
From there, I’ll discuss the types of reforms that emerged from these insights and the role research has played in the genesis of new practices and approaches to strengthening outcomes for students with weak academic skills—and, indeed, for all students.
How did research on developmental assessment and placement add to our big-picture understanding of dev ed?
One of the problems with developmental education 15 or 20 years ago was that colleges relied too much on easily administered standardized assessments to determine which students needed extra academic support. These assessments were used to place students into one of two categories: college-ready or in need of developmental education. But these were extremely rough measures. CCRC research showed that in some cases, a quarter of students assigned to developmental education could have earned a B if they’d been placed in the relevant college-level course instead.
At first, this seemed shocking, but in retrospect, it should not have been surprising. There is no reason to believe there’s some particular score on an assessment that can differentiate students who are ready to succeed in college-level courses from those who aren’t. Students who score just above and just below the cutoff for college readiness differ by only a percentage point or two in their probability of success. Wherever that cutoff is placed, there will be many students below it who are already capable of succeeding. The wide variation in cutoff scores and tests across colleges further attested to the arbitrariness of these measures.
This research led to two crucial insights. First, given the placement of the cutoff scores, it was evident that college staff members were much more willing to put students into courses they didn’t need than to risk having them fail college-level courses. Second, it was clear that the system of categorizing students into two distinct groups was deeply flawed, and that placement practices needed to take into account multiple differences among students. Assessments should be tools to achieve that end, not means to sort students into dubiously defined categories.
Over the last decade, what reforms have emerged from these data-driven insights about college readiness assessments and developmental education in general?
Reforms in the last decade have in various ways recognized the urgency and significance of the problem and worked to move away from the simple college-ready/developmental dichotomy. Corequisite models, which enroll students in college-level courses with some additional support, have been one of the most common reforms implemented by colleges and states. Math pathways also reject the dichotomous categorization of students inherent in the traditional model of developmental education and connect math instruction for students with weak math skills to their programs and goals. And colleges are trying many different types of assessment systems that rely on multiple measures, or even qualitative assessments, to determine what services students need. Research has so far suggested that these are positive developments, at least in improving students’ chances of getting through introductory college-level courses.
What role has CAPR research and other research funded by IES played in the growth and effectiveness of the developmental education reform movement?
IES-supported research and data collection have been crucial both to the emergence of the reform movement and to the development of solutions. The publication of the IPEDS graduation rates, as flawed as those measures are, focused broader attention on the poor outcomes of so many community college students. The emphasis on rigorous causal analysis led to the discovery that many programs and practices that appeared promising did not have meaningful effects. Regression discontinuity analyses conducted by the IES-funded National Center for Postsecondary Research (NCPR) were among the first to show that traditional developmental education was not effective, at least for students with assessment scores near the cutoffs. Studies of learning communities and summer bridge programs, also funded by IES through NCPR, showed that even when short programs led to near-term improvements, those benefits tended to fade. IES has also funded important projects on developmental education in high schools, Florida’s statewide developmental education reform, and corequisite remediation. CAPR studies have shown the value of multiple measures and math pathways reforms while generating insights into the implementation and design of these reforms. Finally, the CAPR national survey has shown that many of these reforms are spreading.
What are the next steps?
We have identified the significance of the problem and generated a reform movement. We have tested many reforms and found that some are not effective, but we have also found many that are. We are confident that we can significantly increase the percentage of our students who complete their first-level math and English courses and achieve some medium-term positive outcomes. But we have also learned that developmental education reform alone will not significantly improve the long-term outcomes of community college students, or any students, if those desired outcomes involve securing a good job or effectively preparing for transfer and subsequent education.
We do have more comprehensive models. Research on CUNY’s ASAP model has shown that for many students assigned to developmental education, graduation rates can be doubled by a program that supports students throughout their college careers with good financial supports; comprehensive advisement, career development, and academic support; and measures to promote a connected peer community. Guided pathways is a whole-college reform model that includes comprehensive advising as well as the redesign of programs and majors and stronger connections to high schools and four-year colleges. Researchers are continuing to examine whether these reforms have substantively significant positive effects when they are fully implemented for all students. Certainly, comprehensive reforms must include services to strengthen students’ academic and noncognitive skills, but those services may look very different when they are incorporated into reforms that address the issues all students encounter throughout their college careers.