This week the U.S. Department of Education made a groundbreaking decision to allow four school systems in New Hampshire to pilot a new accountability regime based on a mix of local and state assessments. This first-of-its-kind arrangement marks an important policy development for competency-based systems and signals a move in the right direction for federal accountability.
New Hampshire’s Performance Assessment for Competency Education (PACE) pilot will allow locally managed assessments to count toward federal accountability requirements. The PACE project began in 2012 as an opt-in effort for districts to coordinate local approaches to performance assessment. Starting this year, the four districts implementing PACE (Sanborn Regional, Rochester, Epping, and Souhegan) will administer the Smarter Balanced assessment once in elementary school, once in middle school, and once in high school (in three grades instead of seven). In all other years, when students aren’t taking Smarter Balanced assessments, the PACE districts will administer carefully designed common and locally managed “performance assessments,” developed by the districts themselves and validated at the state level.
Although there is a range of definitions of what constitutes a performance assessment, according to the New Hampshire DOE, “[p]erformance assessments are complex, multi-part tasks that ask students to apply what they have learned in sophisticated ways.” The state emphasizes that a variety of media may qualify as evidence of mastery. The Department explained that these assessments vary by context and subject, and sometimes by a student’s particular interests:
For example, in English, middle school students might submit research papers showing that they know how to analyze and present information from many sources. In math, fourth-graders might design and cost out a new park and write a letter to their board of selectmen arguing their perspective based on their calculations and other evidence.
Given the obvious variation introduced by this range of performance tasks (as opposed to multiple-choice standardized exams) and a broader definition of what constitutes mastery for a given student, state-level validation and Smarter Balanced assessments will function as a systems audit of the quality and consistency of these locally designed tests.
It’s not surprising that the state furthest along in moving to a competency-based system—in which students advance based upon mastery, rather than seat time—is leading the way to new testing regimes. A new approach to assessment sits at the fulcrum of any competency-based approach. The notion that students should advance upon mastery suggests that assessments need to be administered on an on-demand basis (when students are ready), rather than to a cohort of students at the end of a unit or course. Additionally, in many competency-based systems “mastery” may be demonstrated in a variety of ways. In other words, competency-based tests are a highly integrated part of learning and allow students to show what they know when they are ready to do so. This marks a sharp departure from the current testing paradigms in most classrooms, reified by federal accountability rules, which rely on a static snapshot of mastery or lack thereof. As such, the PACE pilot offers one step toward aligning competency-based assessment to our federal accountability regime, which has historically focused on once-yearly tests as the yardstick for student performance.
The focus on competency-based education is by no means a new effort in New Hampshire; as I described in an Education Next article last month, the state has been transitioning to competency-based practice for over a decade. The shift has been a gradual one: despite a bold 2005 state policy mandating that high schools measure credit in terms of mastery rather than instructional time, implementation still varies widely today, largely because of the state’s strong tradition of local control. Many New Hampshire high school students, even under the new law, still experience cohort-based instruction, testing, and grading.
But this may be changing. Early on, the state struggled to balance mandating competency-based approaches with allowing local districts the freedom to shape their own models. Yet the PACE project is an important new chapter in this narrative. The state is taking increasingly deliberate steps to provide technical assistance and collaboration opportunities to school systems embracing competency-based education. In addition to the PACE project, the state has built a number of “networks” focused on tackling challenges to implementing competency-based, personalized systems, such as professional development and differentiated instruction.
Perhaps most important for other states and districts, the PACE pilot signals that federal policy may be inching toward a more flexible take on testing while still maintaining a sharp focus on holding schools accountable. This is precisely the approach that Michael Horn, Thomas Arnett, and I suggested in a blog post last week: as Congress wrestles yet again with NCLB reauthorization, it needs to leave room for the innovations in assessment that are bound to arise in the near future. We particularly emphasized how technology stands to play a key role in the shift toward frequent formative assessments, which could eventually yield more accurate and reliable information about student performance than summative tests.
Interestingly, the PACE pilot is not a waiver, the Department’s rather blunt instrument for making space to modernize NCLB requirements in the absence of reauthorization. The flexibility applies only to the four pilot districts; all other districts in New Hampshire remain obligated to follow traditional federal testing rules. This is a promising approach, given that new models of personalized, competency-based, and blended learning tend to arise at the district, rather than state, level. Leaving room for these local efforts allows practitioners to continue developing innovative approaches to teaching and learning, while also acting as an R&D engine for what state and federal accountability policy could eventually look like as these models spread. And, crucially for the federal testing debates on the Hill over the past few months, these experiments in assessment can lend the nation a vision of a future of testing that is both more humane and a more accurate benchmark of individual student mastery.