When Clayton Christensen and Michael Horn published Disrupting Class in 2008, the current wave of education technology was still finding its footing. The book made two predictions. First, online learning would grow rapidly in K-12 schools. But scale was not the endgame. The second, and arguably more crucial, prediction concerned the opportunity ahead: with the right incentives in place, technology could scale with an eye toward optimizing individual learners’ academic outcomes.
Fast forward a decade. The edtech industry has arguably seen what TechCrunch’s Mike Butcher aptly called a number of “false dawns.” Although investments in online and blended tools grew steadily over the years, they did not always bear fruit. This stems in part from how we’ve defined success in the market: technology could unlock better outcomes, but whether it does depends in large part on what schools demand of it.
When procurement decisions focus too heavily on inputs like enrolling students in online courses or filling tutoring time, it’s hardly surprising to find tools merely providing cheaper seat-time-based learning models. Pockets of the market, such as credit recovery, have largely fallen victim to this trend. Moreover, even if schools want to focus on learning outcomes, they may be purchasing tools blindly when little to no efficacy data exists to inform their decisions.
Luckily, a range of demand-driven efforts are emerging to tackle this disconnect between technology’s immense possibility and where it’s currently falling short. Here are five drivers with the potential to close that gap:
1. Pool demand
One factor that makes any conversation about edtech demand fraught is the sheer fragmentation of the education market. Decisions about particular tools, features, and functionalities are made on such a one-off basis by individual schools and districts that vendors are rarely pushed in a single direction by the market as a whole.
But there are promising models to combat fragmentation. Project Unicorn, for example, is a collaborative effort of school systems and education nonprofits to pool demand for data interoperability in the market. Among other things, the project consists of a simple pledge that both school systems and vendors can sign on to, committing them to technology practices such as adopting and integrating data interoperability standards and educating their communities about data privacy.
2. Increase transparency
Pooling demand for technical attributes is all well and good. Demanding that a tool be effective at driving desired outcomes, however, poses additional challenges. Unfortunately, given the dearth of information on how different tools perform in different circumstances, efficacy and adoption remain woefully disconnected in procurement decisions. In other words, schools may continue to adopt tools regardless of whether those tools are actually driving learning.
In part, solving this requires more comprehensive research on efficacy, like Harvard’s Proving Ground effort on educational software. But it turns out that transparent and useful information need not come only from expensive research trials. Efforts to get more detailed accounts from teachers on the front lines are equally exciting. For example, Jefferson Education’s recent transformation into the Jefferson Education Exchange (JEX), a shift from its original strategy of conducting research on edtech tools with the academic community to soliciting data directly from teachers, marks a promising pivot to gather user insights that can tip the scales of demand toward usability and efficacy.
3. Connect end users to procurement decisions
Better information helps—but only if it is used to drive actual purchasing decisions. That leap can be tricky in a market like education, where there’s something of a principal-agent problem: those with final say about which tools to buy are rarely those who ultimately absorb the impact of that purchase.
Put differently, end users (in this case, both teachers and students) do not hold the purse strings when it comes to which tools the district or school ultimately purchases. To bridge this gap, schools need to ensure that teachers and students are part of the procurement process and have channels to provide regular feedback on whether tools are accomplishing what curriculum and technology departments hoped they would.
4. Talk about pedagogy
A tool’s efficacy, of course, will hinge on the pedagogical model it’s intended to support. As I’ve noted before, edtech debates can devolve into knee-jerk reactions as to whether technology is inherently good or bad.
In reality, the crux of the debate may be rooted in competing views of pedagogy. A highly constructivist educator may want tools that encourage exploration or don’t limit students to a single progression through a prefabricated curriculum. A more behaviorist educator might delight in drill-and-practice exercises but do little to take advantage of project-based tools. I rarely see these distinctions debated or even spelled out (with a few compelling exceptions here and there). If we hope to truly tackle fragmented demand, edtech conversations need to dedicate more time to examining competing pedagogical philosophies, and how tools do or don’t support them.
5. Test theories of not just what works, but why
Based on our research, however, the holy grail of effective technology integration in schools is not a single verdict of “effective” or “not effective,” nor even a clarified pedagogical point of view. Instead, it is a virtuous cycle that constantly tests and refines our theories of why particular tools (in concert with other decisions about how to use time and space, foster relationships, and measure outcomes) are or aren’t working. Without forming, testing, and refining these underlying theories, demand-driven initiatives risk getting stuck in untested assumptions.
Simply asking “what works” stops short of the real question at the heart of a truly personalized system: what works, for which students, in what circumstances? Without this level of specificity and understanding of contextual factors, we’ll be stuck knowing only what works on average, despite aspirations to reach each individual student (not to mention mounting evidence that “average” itself is a flawed construct). And without that theoretical underpinning, scaling personalized learning approaches with predictable quality will remain challenging.
This post was originally published on EdSurge.