On Monday, I had the opportunity to take part in a spirited debate about blended learning at the iNACOL Symposium in Orlando. The debate focused on blended-learning research and what it tells us about blended learning’s effectiveness. Below is my take on how we should interrogate the efficacy of blended learning in a manner that actually helps practitioners solve their most acute problems.

Does blended learning work?
A common question we hear from practitioners and policymakers is, “Does blended learning work?” This is the wrong question to be asking. It inherently treats blended learning as a singular intervention that we could evaluate through A/B testing to yield a clear “yes” or “no” answer. Blended learning, however, is a delivery mechanism that encompasses a range of school models incorporating a wide variety of approaches to teaching and learning. Like any effort to deliver instruction, we’ve witnessed both good and bad implementations of blended learning. As such, the answer to the question, “Does blended learning work?” tends to be a downright dissatisfying, “Sometimes it works, sometimes it doesn’t.” And although that answer may be technically accurate to researchers, let’s be honest: “sometimes” is rarely useful to real leaders making real decisions about how to tailor instruction to their real students’ needs.

Our research suggests that we should think of blended learning as a delivery model that schools choose to implement to solve a specific problem—perhaps to increase small-group instruction, perhaps to offer courses otherwise impossible to staff, or perhaps to save money. As such, specific blended implementations should not be evaluated in a vacuum; they should be assessed on how well they solve the specific problems they were designed to solve.

With a clear sense of the problems that they are trying to solve, schools may be able to home in on the particular blended models best suited to their needs. Our research suggests that some blended-learning models are best suited to increasing raw performance against our current metrics, while others offer new value propositions like greater access or flexible pacing. Certain models—such as the Station Rotation, Lab Rotation, and Flipped Classroom models—will be easier to implement as solutions to core problems, such as boosting traditional performance metrics like seat-time and average test scores. These models, by definition, layer online modalities on top of the traditional classroom. Other models—such as the A La Carte, Enriched Virtual, Flex, and Individual Rotation models—are better suited to tackling problems where schools’ current alternative is nothing at all. This helpful rubric from Michael Horn and Heather Staker’s book Blended offers a good starting point for matching a model to particular goals.

But really, does blended learning work?
To move beyond the chronically ambiguous “it depends,” we need to approach our questions and research methods in new ways.

Integrating technology into classrooms, when done well, can mark a sharp departure from the undeniable temptation in analog classrooms to teach to the middle. Children lose when we don’t know what they do and don’t know; when we move them through material too quickly or too slowly; and when we insist that if traditional teacher-led instruction worked for us, it should work for them. Blended-learning models stand to offer more precise data that can facilitate more precise differentiated instruction.

But to deliver on this promise, we shouldn’t presume that a single technology tool will drive learning in equal measure for all students. That assumption falls into the trap of traditional education research that asks what works on average, or what is best at teaching to a non-existent middle. If we measure tools for average efficacy, we risk focusing on technology tools that digitize our traditional practices, rather than on seeding breakthroughs in differentiated instruction.

To build a school system that takes advantage of technology to support different students in different circumstances in real time, we need to move away from measuring average outcomes from singular tools. Instead, we should sort the software tools fueling blended models on the basis of their diverse advantages and drawbacks, rather than measuring them against a one-size-fits-all yardstick. Practitioners and researchers alike need to get savvy about capturing what works for which students in which circumstances. Some EdTech tools might be great for in-class practice exercises but terrible for homework help. Likewise, some EdTech tools might only engage students with certain interests. Our current research methods fall short of yielding such insights.
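To make that contrast concrete, here is a minimal, hypothetical sketch—the tool name, contexts, student groups, and gain scores are all invented for illustration, not drawn from any study—of the difference between asking what works on average and asking what works for which students in which circumstances:

    # A minimal, hypothetical sketch: the same outcome data evaluated two ways.
    # Records are (tool, context, student_group, learning_gain); all values invented.
    from statistics import mean
    from collections import defaultdict

    records = [
        ("ToolA", "in-class practice", "struggling readers",  0.30),
        ("ToolA", "in-class practice", "advanced readers",    0.25),
        ("ToolA", "homework help",     "struggling readers", -0.05),
        ("ToolA", "homework help",     "advanced readers",    0.10),
    ]

    # The "what works on average" question: one number per tool.
    overall = mean(gain for _, _, _, gain in records)
    print(f"Average gain for ToolA: {overall:+.2f}")

    # The "what works for which students in which circumstances" question:
    # disaggregate by context and student group before drawing conclusions.
    by_condition = defaultdict(list)
    for tool, context, group, gain in records:
        by_condition[(context, group)].append(gain)

    for (context, group), gains in by_condition.items():
        print(f"{context:18s} | {group:18s} | {mean(gains):+.2f}")

In this toy example, the single average looks modestly positive, while the disaggregated view reveals a tool that shines for in-class practice and flops as homework help—exactly the kind of distinction an average-only evaluation would bury.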

This approach could radically shift our ability to discern what is and isn’t working in the online modalities within blended models, with a precision that the “what works” chorus of the past decade hasn’t achieved. This, in turn, would give us the information we need to use technology to personalize learning in a nimble manner that optimizes for individual student mastery.

The same approach to studying teaching could likewise help shed light on effective face-to-face practices in blended settings. In theory, blended learning should be changing and unlocking new teacher-student interactions, and we should be studying those with the same rigor with which we are evaluating software tools. We have decades of effective-teaching research that can inform these questions. But just as with software, we should start to sort effective teaching practices beyond average outcomes, asking what sorts of teaching work for which students in which circumstances.

Author

  • Julia Freeland Fisher

    Julia Freeland Fisher leads a team that, through its research, educates policymakers and community leaders on the power of Disruptive Innovation in the K-12 and higher education spheres.