The education field regularly wrestles with the need for common definitions as new practices emerge. As we’ve seen with personalized learning, ambiguous terms for ambitious ideas can create widespread confusion and even skepticism. By contrast, a common definition can serve as a “north star” that helps practitioners and policymakers focus their efforts to implement a new approach. In the case of competency-based education, researchers and educators have worked for years to define the term and document its component parts in detail.
However, even when a general term has a widely accepted common definition, it remains persistently difficult to use that term to describe what implementation actually looks like, because not all components of a broad definition are in practice at once. Last year, as we began working toward better collective knowledge on school innovation through the Canopy project, this challenge came into focus: we heard that what people expected from a visit to a so-called “competency-based” school often differed from what they witnessed in person. These interviews gave us a hunch that simply labeling schools “competency-based” didn’t go far enough to indicate what’s actually happening in the school.
The Canopy dataset (downloadable from the project website along with a report of our primary findings), compiled from a diverse group of 235 schools across the country, offered a way to test that hunch. To build the dataset, nominators and schools used “tags,” or keywords and phrases representing elements of school design, to describe each school’s model. The tagging system includes the more general term competency education as well as an expanded set of “specific practice” tags that describe more granular parts of a competency-based system.
Cracking open the definition
Canopy tags can describe actual practice at a school, or the language used to describe practice, and disentangling these two possibilities is often impossible—especially when it comes to more general terms that represent a philosophy as much as a concrete design choice, like competency-based education or project-based learning. We guessed that a tag like competency education would help us see which schools describe themselves as competency-based, but would not necessarily be a reliable barometer for what practices were actually at play in the school’s model.
Indeed, the data bears out this hypothesis. In the Canopy dataset, the competency education tag was cited by about half (53%) of the 173 schools that confirmed their data. Yet the rates at which schools cited specific practice tags associated with competency-based education ranged widely: almost 75% of schools said they offer multiple opportunities for students to demonstrate mastery, while under 40% report having a flexible assessment schedule or actually advancing students upon demonstration of mastery, two practices documented as key components of competency education.
This data suggests two major conclusions. First, some of the schools describing their models as competency-based aren’t implementing everything described in the formal competency-based education definition. Among the set of schools tagged competency education, for example, only 47% report having a competency framework that defines what proficiency looks like at each performance level. Second, some schools may be implementing certain aspects of competency-based education, but not describing their overall model that way. For example, among schools indicating that their grading policies focus on mastery (such as via standards-based grading that encourages a student to keep working toward mastery rather than penalizing mistakes), only 75% also used the competency education tag.
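To make these figures concrete, here is a minimal sketch of how such tag prevalence and co-occurrence rates might be computed from the downloadable Canopy dataset. The file name and column names (e.g., competency_education, mastery_based_grading) are hypothetical placeholders, since the actual export may be structured differently.

```python
import pandas as pd

# Load the school-level records. The file name and columns below are
# assumptions for illustration, not the actual Canopy export format.
schools = pd.read_csv("canopy_schools.csv")
confirmed = schools[schools["confirmed"] == 1]  # schools that confirmed their data

def tag_rate(df, tag):
    """Share of schools in df carrying a given tag (tags assumed to be 0/1 columns)."""
    return df[tag].mean()

def conditional_rate(df, given_tag, target_tag):
    """Among schools with given_tag, the share that also carry target_tag."""
    return df.loc[df[given_tag] == 1, target_tag].mean()

# Overall prevalence of the general label (compare to the 53% cited above).
print(f"competency education tag: {tag_rate(confirmed, 'competency_education'):.0%}")

# Conditional rates like those above: schools using the general label that also
# report a competency framework, and mastery-grading schools that also use the label.
print(f"framework, given the label: "
      f"{conditional_rate(confirmed, 'competency_education', 'competency_framework'):.0%}")
print(f"label, given mastery-based grading: "
      f"{conditional_rate(confirmed, 'mastery_based_grading', 'competency_education'):.0%}")
```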
Understanding the varying flavors of competency-based models
These findings raise questions about the varying “flavors” of competency-based education that emerge depending on which concrete practices are actually implemented. For example, about half of the schools offering multiple opportunities to demonstrate mastery report doing so without a flexible assessment schedule, perhaps meaning that while students can retake a test to improve their score, everyone still takes tests on the same timetable. And just over half of schools reporting mastery-based grading policies do not report allowing students to actually advance to a new topic once they show mastery.
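One rough way to surface these “flavors” in the data is to count how often distinct combinations of specific practice tags occur together, as in the sketch below, which reuses the confirmed dataframe and the same kind of hypothetical column names as the sketch above.

```python
# Count distinct combinations ("flavors") of a few specific-practice tags.
# Column names remain hypothetical placeholders for the real Canopy fields.
practice_tags = [
    "multiple_demonstrations_of_mastery",
    "flexible_assessment_schedule",
    "advancement_upon_mastery",
    "mastery_based_grading",
]

flavors = (
    confirmed.groupby(practice_tags)  # each unique 0/1 combination is a candidate "flavor"
    .size()
    .sort_values(ascending=False)
    .rename("num_schools")
)
print(flavors.head(10))
```

A handful of large, recurring combinations would look more like distinct models; a long tail of one-off combinations would look more like schools at different points on a journey.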
Do these “flavors” actually hint that there are distinct models for competency-based education being lumped together under a single term, like referring to both sorbet and gelato as “ice cream”? If so, researchers should make efforts to codify the competency-based models that are emerging. “Competency-based education” may still be the banner under which these models fly, but understanding the mechanics of each distinct approach may be critical to documenting how, and under which circumstances, they scale. Such research in the competency-based arena would be akin to the way the Christensen Institute documented blended learning models as the practice emerged and took root.
Or do these competency-based “flavors” represent earlier stages on a school’s journey toward a more advanced model, like making vanilla ice cream before attempting a recipe for Rocky Road? In that case, researchers building data on competency-based schools should track the concrete practices actually being implemented, to paint a more complete picture of where a school is on that journey. For example, I spoke to one person who looks for multi-age classrooms as a telltale signal when trying to gauge how far a school has moved toward a competency-based model. If a school has truly moved away from seat-time-based models and social promotion, an observer would expect to see multiple ages learning together, rather than age or grade level serving as the primary organizing principle. Among the schools tagged competency education in the Canopy data, 48% were also tagged multi-age classrooms.
Identifying patterns across a granular set of practices can begin to shed light on variations in what schools are actually doing under the banner of more generalized language, however well defined that language may be. To accomplish this, we in the research community must advance our ability to capture the details of school practice in consistent ways. If we don’t, we’ll risk any hard-won common definition of competency-based education becoming no more than hot air.
This piece was originally published on eSchool News.