This week saw the release of the third in a series of personalized learning studies conducted by the RAND Corporation. The research analyzed implementation, survey, and efficacy data from a sample of schools in the Next Generation Learning Challenges (NGLC) portfolio and compared that data to a national sample of schools. The findings? NGLC schools yielded some positive academic results, but educators and administrators reported numerous challenges. And in some cases, such as how often teachers reported “keeping up-to-date documentation of student strengths, weaknesses, and goals,” the researchers detected little to no difference between personalized NGLC schools and traditional schools. (NB: For those familiar with the 2015 RAND report on personalized learning by the same authors, it’s worth noting a crucial distinction: this new study analyzed a group of 32 schools, only 16 of which were included in the set of over 60 schools in the 2015 sample, which helps explain some of the divergent conclusions between the two.)

Advocates and critics of personalized learning will inevitably interpret these findings in wildly different ways.

Long-time advocates hoping to dismantle traditional factory-based instructional approaches can cite the academic findings as evidence of the potential promise of personalized models. And they can defend apparent shortcomings by pointing out that the small sample of NGLC schools does not represent the numerous other promising personalized learning efforts afoot across the country. At the same time, those critical of the fever pitch around personalized learning can use the merely modest gains and numerous challenges that researchers surfaced to downplay its potential. Already, headlines reporting on the findings have alluded to the ‘hyped’ or overblown ambitions of personalized learning.

But stepping back, what some would call promising or hyped, I would call nascent. I’m skeptical that the particular findings in this most recent report should be extended into broad statements about personalized learning in either direction. Put differently, I’m not sure the study gives anyone reason to either celebrate or denounce personalized learning. The academic gains are positive, but hard to attribute to specific practices or inputs. And the challenges implementers reported are real, but they shouldn’t be conflated with the efficacy of personalized practices themselves.

Instead, if we can acknowledge that schools are still extremely early in developing and implementing instructional models that personalize along a range of dimensions, then we should be wary of treating any research at this stage as an authoritative source on whether or not something so new and ill-defined actually “works” writ large.

This is not to say that RAND did a bad job trying to make sense of a new and evolving set of instructional and structural shifts afoot in schools. The researchers were handed a sample of NGLC schools attempting to personalize learning in a wide variety of ways. They then tried to measure those practices along dimensions, like competency-based learning, that are themselves still hard to calibrate, and to compare them to traditional schools.

As such, the research does offer a useful frame for keeping the schools within the sample honest about whether they are making the radical departure from traditional practice, and seeing the academic gains, that they set out to achieve when they pursued new instructional models. But because the schools’ approaches varied so widely, with a broad range of inputs deemed “personalized,” the findings are not a great way to adjudicate the value of personalized efforts as a whole. These “personalized” approaches, in other words, remain extremely hard to compare in an apples-to-apples manner.

Moreover, innovation is chronically hard to measure at a single point in time. Reading the study reminded me of the possibly apocryphal story of Thomas Edison pursuing over a thousand experiments before eventually producing the first commercially viable light bulb. By analogy, RAND was essentially tasked with looking at a variety of experiments across multiple school sites, at a variety of stages of implementation. As such, the research reads as though RAND were in the lamp factory measuring all of those tiny light bulb experiments, across multiple scientists, at multiple points in time, relying in part on self-reported data, and then trying to say whether the light bulb “worked” compared to oil or gas lanterns. That effort may be heroic, but it is futile when it comes to making broad conclusions about the state of an innovation as complex as personalized learning. Rather, the study’s findings are best taken as an accurate glimpse into the state of the numerous discrete experiments at play in the field, one best understood by digging into the data at the level of each school.

There are two risks, then, in how such data will be interpreted. Naysayers may be tempted to use the findings to suggest that personalized learning is not all it’s cracked up to be, that the proverbial light bulb is itself an “overhyped” concept. Doing so risks throwing the baby out with the bathwater (or worse, throwing the light bulb into the bathwater). If Edison and his team had treated each tiny experiment along the way as the day of reckoning for the fate of the light bulb, of course the findings would have disappointed. Instead, we should be looking at the research as an evaluation of an ongoing, evolving bundle of innovations, rather than of one monolithic thing called “personalized learning.”

The second risk is that personalized learning advocates will likewise fall into the trap of generalizing around the monolithic term, rather than asking themselves hard questions. At times, the call for personalized learning has focused only on examples of the instructional models that are furthest along and yielding positive results. From a research perspective, if personalized learning only describes instances of positive outcomes, we risk a tautology: schools that are getting great results will be deemed personalized, while everyone else isn’t. That may give superintendents and advocates a chance to pat themselves on the back, or provide ‘proof points’ in the field, but it won’t necessarily move the needle in terms of understanding what works, in what circumstances, for which students. Put bluntly, the field needs to come to terms with the fact that there can be both effective and ineffective instances of personalized-learning implementation. And the most helpful research will not just point us to high-quality implementers; it will surface the specific circumstances and practices producing the best results and those falling short, either because the rhetoric of personalized learning isn’t matching the reality on the ground, or because certain practices deemed “personalized” aren’t actually driving toward positive outcomes. For that purpose, RAND’s latest study ought to be just the beginning of a much larger body of research that will need to follow.

Author

  • Julia Freeland Fisher

    Julia Freeland Fisher leads a team that, through its research, educates policymakers and community leaders on the power of Disruptive Innovation in the K-12 and higher education spheres.