Will eliminating the ‘F’ eliminate bad school design?


Jul 6, 2016

The dreaded “F” is going out of vogue in schools. This week’s Washington Post article, “Is it becoming too hard to fail?”, chronicled a host of K–12 school systems that are moving away from the age-old tradition of failing students whose work doesn’t cut it, in hopes of keeping students motivated and on the road toward graduation.

The article, however, does not answer the most important question that these new policies must consider: by eliminating the “F,” are students in turn less likely to fail?

There is an obvious circularity to this question: the answer depends on how we measure failure, if not by letter grades. The reality is that in our current system some students may not master a semester’s worth of Algebra or social studies in the time allotted before a final exam determines their grades. Simply eliminating bad grades does not change that fact. Commentators like Mike Petrilli are right to point out the risk, then, that making it impossible to fail reeks of the “soft bigotry of low expectations.”

But skeptics of eliminating failing grades must likewise acknowledge that our current grading system perpetuates school designs that are already failing to ensure students’ long-term success. Indeed, according to the most recent National Assessment of Educational Progress (NAEP) results, just 37 percent of high school seniors are prepared for college-level math and reading. These low levels of performance are disappointing but not surprising if we pause to think about the fundamental structure of our K–12 education system. By design, we move students forward grade by grade based largely on the amount of instructional hours they have spent in class—dubbed “seat time”—rather than their mastery of academic skills and content. This structure permeates even week-by-week instructional methods: as schools rush to cover the bevy of standards on state tests each spring, and as teachers instruct students spanning a wide range of mastery levels, classes tend to move forward to new course material regardless of whether students have proven that they understand the concepts covered in the days and weeks prior.

As a result, gaps in understanding are compounded, predictably accumulating to the point that by the time students are 18, far less than half are ready for college-level learning. These are not just “F” students: even by giving a student a 70 percent on a test and continuing onwards, we are letting gaps accumulate. As Sal Khan, educator and entrepreneur, aptly put it, this foreshadows a failure further down the line: “We are telling students they’ve learned something that they really haven’t learned. We wish them well and nudge them ahead to the next, more difficult unit, for which they have not been properly prepared. We are setting them up to fail.”

There is an alternative to these sobering results. Competency-based learning models taking root in a small minority of K–12 schools advance students based on mastery, rather than seat-time, and allow them to progress through courses at a flexible pace.

This approach can have big implications for what grades themselves mean, as Chris Sturgis of CompetencyWorks has written about extensively. In a truly competency-based system, if an assessment demonstrates that a student doesn’t understand some proportion of material (expressed in, say, a “C” or “70 percent” in traditional grading lingo), the student must revisit that material until he can demonstrate mastery of the remaining 30 percent of concepts. Competency-based schools work hard to ensure that students receive just-in-time supports when they are struggling. Some competency-based schools are even doing away with age-based grade levels entirely, treating learning as a continuum, and holding multiple graduation ceremonies each year to award diplomas when students are ready.

Grading reforms like those profiled in the Washington Post’s article will mean little if schools fail to adopt competency-based structures to provide just-in-time supports for students who might otherwise languish on the brink of failing. This requires more than simply churning struggling students through end-of-course “catch up” or making them take the same test over and over until they pass. Instead, classrooms must be fundamentally redesigned to fill gaps in understanding in real-time and allow students to move at a flexible pace that accords with their understanding.

That level of individualized support and flexible pacing can be tricky for a single teacher responsible for many students to pull off. As our research has shown, technology can make these wholly new models feasible at scale. Online content can offer a continuum of learning along which students can progress at a flexible pace. It also can be deployed in a more modular manner than traditional face-to-face instruction, which in turn offers students multiple pathways to mastery, as opposed to a single lesson or textbook that a whole class must sit through at the same rate. Additionally, using online assessment tools, testing can occur on-demand—that is, when students are ready to be assessed, not before or after. Finally, technology tools can capture richer data on where students are failing to master concepts, making it easier for teachers to target both online and face-to-face supports accordingly.

Without new structures and supports, getting rid of failing grades risks hollowing out the value of grades themselves and, ironically, failing too many of our students in the long run. To cross the chasm between grades and true preparation for college and beyond, our systems will need to do more than eliminate the doomsday “F”: they must move away from measuring progress in instructional hours and instead embrace competency-based approaches.

Julia Freeland Fisher

Julia is the director of education research at the Clayton Christensen Institute. She leads a team that educates policymakers and community leaders on the power of disruptive innovation in the K-12 and higher education spheres.

  • Julia, thank you for advancing this topic. In my district we have been engaged in multiple discussions: 1. What is the purpose of homework? 2. How should students be graded/evaluated/assessed? As a middle school principal, I support and encourage ongoing meaningful feedback from teachers to students. This feedback does not necessarily include a “letter” grade. We have found feedback to be more valuable for sustained long-term individual student growth than letter grades with no or limited feedback.
    I support the concept of competency-based schools as a theoretical approach; however, practical support from policymakers is lacking, as students are often “locked” into a grade level based on an age metric as opposed to their individual ability to demonstrate content mastery.

  • Julia — This is an excellent response. Thank you.

    You scare me how quickly you turned this out. I’ve still been contemplating how to respond to the Washington Post’s article (WP really needs to beef up their capacity to report on education reforms — this is the second example of not doing enough research to understand the context of what they are reporting on). I think the major point that the authors failed to understand is that schools should be focused on learning not just who passes and fails. That focus on failure is totally “old school”.

    FYI for those who want a better understanding of how to implement new grading policies effectively: you can start with the two-part series on Cworks based on what we’ve learned from schools across the country.


  • Just a quick comment from a corner of the higher education world: a strong cohort of colleges in the US innovation sector has operated effectively without letter grade evaluation for over a half century, and without the disarray that some individuals immediately predict. Schools like Evergreen State, Prescott College, New College of Florida, Hampshire College, Marlboro College, and others — many of which have operated under the banner of the Consortium for Innovative Environments in Learning — have prioritized a mix of competency and narrative evaluation standards. These schools have a student-centered practice that effectively entangles traditional approaches to liberal learning and employment readiness, and they have rich datasets about how students learn in environments without punitive tracking mechanisms. Hampshire, in particular, has been studying the ways in which students thrive in such environments and is developing better predictive models about advising, mentoring, and recruitment to maximize success. There may be good lessons in thinking about competency-based approaches in K12 and vertical alignment with post-secondary by engaging this small sector’s work with professional education domains (Law, Medicine) deeply connected to GPA-based systems. I believe strongly that the successes of this small sector — never full-fledged disruptions because of the ubiquitous shaping arm of Title IV delivery — have much to teach (and learn from) the K12 sector.

  • Moreen Carvan

    Hi Julie! I enjoyed the panel presentation at HLC in March, and I’ve been following ever since. You’ve made a point that educators have been making for years – grades are not good indicators of learning, though some of the assessment contributing to the grade might be. As Jim Hall has noted, there is precedent and evidence that thoughtfully designed learning and assessment cycles provide effective evaluation of both learning and development over time.

    I’ve been working with a colleague on a framework for design of learning and assessment for mastery for post-doctoral learning in complex work settings. Grades have no meaning in such a context. Evidence emerging from real challenges does have meaning, and can inform the learning of a whole cross-sector cohort.

    I think that grades persist because of a reinforcing cycle of guarded miscommunication based on assumptions – and it might be time for an accrediting body to take a stand on this issue.

  • Lou Coenen

    Both Jim Hall and Moreen Carvan have raised valid points. For some disciplines and levels, the demonstration of competency is relatively straightforward. However, the further the learner progresses, the harder the challenge becomes.

    The “competency-based” model ultimately represents the desired outcome, but it would be helpful to hear how readers who are actively using this approach address the following two questions:

    How does one develop the teaching and assessment materials in a competency-based environment when there is a requirement for the teaching institution (and teacher) to have a finite, quantifiable (and defensible) outcome within a defined time period — for both social and fiscal reasons?

    What determines the appropriate competency “demonstration” to receive scalable “credit,” especially in disciplines where there is rapid change or that rely on “soft” skills?