This contributed guest piece is by Christian Talbot, President & CEO of MSA-CESS

In December 2022, I needed only one practice session with ChatGPT to realize that generative AI (GenAI) was going to become the most transformational technology for education since the public internet. 

Unlike previous technology waves, which had accelerated information transmission, GenAI could function as a co-creator and a thought partner. 

So, a few months later, in my role as President of the Middle States Association of Colleges & Schools, I recruited an advisory team to create an accreditation model for responsible AI in learning (RAIL).

I opened the first meeting of that team by asking, “How might we responsibly integrate AI into learning?”

Tom Vander Ark, a member of the advisory team and the CEO of Getting Smart, raised a finger. 

“Before we discuss that,” he said, “can we talk about language? Are we committed to the word ‘integrate’?”

Before long, Tom and Amanda Bickerstaff, the founder of AI for Education, were riffing on the three-plus decades of edtech’s failure to change learning outcomes (something that Tom wrote about in “Unfulfilled Promise” for the Hoover Institution).

“If we focus on ‘integration’,” Amanda said, “we’re going to get more worksheets. Kids don’t need to do more worksheets in school.”

Tom put a fine point on things: “We have a window of opportunity for schools to shift from asking ‘How do we integrate AI into our school?’ to ‘What does great learning look like and how can we use AI to support that?’”

This shift isn’t going to happen on its own. Accreditation can incentivize schools to use AI to support great learning experiences. That’s why the Middle States AI advisory team created RAIL.

Acknowledging a worst-case scenario

But first, we need to name the greatest danger AI poses to schools right now: when we talk about “integrating AI,” we are falling prey to the Borg. 

In the 1980s and 90s TV show “Star Trek: The Next Generation,” the Borg are an alien species that assimilates other lifeforms into a homogenized “collective.” In their quest for perfection, the Borg turn individuals into drones.

Compare that with a recent observation by Harvard Graduate School of Education professor Jal Mehta: Schools historically assimilate children into a standardized “grammar” of learning experiences.

A question like “How can we integrate AI?” amounts to turning on the Borg tractor beam, which pulls everything toward it for “assimilation.” 

Students learn better when they’re active, yet when we finally got the technology to project PowerPoint in every classroom, we created digital chalkboards. Students thrive in mastery-based learning environments, yet when a majority of students gained 24/7 internet access, we administered more multiple-choice quizzes. 

Although technology has the potential to personalize or individualize student learning, we’ve leveraged it to continue batch processing. 

Why? 

As Jal might say, we have integrated technology into the “grammar” of content coverage, standardized testing, and traditional grades.

The Borg may have claimed that “Resistance is futile,” but accreditation is actually well positioned to resist assimilation. As Tom Vander Ark has said, AI creates a new landscape of possibilities for learners, who can now learn and do more things through AI. At the same time, accreditation can provide guardrails for safe, ethical, and human-centered use of AI.

Accreditation as a change in priorities

How exactly would that work?

Accreditation emerges from industry-defined standards of quality. To earn accreditation, a school must produce evidence that it meets those standards. (Different accreditors use different language and organize content differently, but we all validate the same essential standards.)

And as the Middle States AI advisory team was designing RAIL to enable AI adoption while avoiding “integration,” we noticed disjointed change efforts in the marketplace—meaning that accreditation standards alone would not suffice.

For example, many edtech companies are reinforcing traditional practices. Many of these products “will accelerate, automate, and scale traditional, broken methods of instruction,” as Dr. Philippa Hardman from Cambridge University has said. A teacher may be able to generate infinite worksheets, but the world is not going to reward students who have completed more worksheets.

Meanwhile, nonprofit agencies and consultancies are attempting to modify practices. If they can assist schools in crafting AI policies, then teachers and students will have permission to use AI safely. This approach is necessary but insufficient. It protects against the downside risk of AI misuse, but it doesn’t create a north star for stakeholders to pursue.

In the language of Clayton Christensen’s Business Model Theory, these efforts reflect attempts to change resources (edtech products) and processes (policy changes), but they leave the most important—and hardest—part unchanged: priorities. 

In a school or district, priorities are the rules and culture that guide decisions about how to leverage resources and processes to fulfill its promises to stakeholders. These priorities sit at the core of the school or district’s value propositions. If priorities remain unchanged, students may receive fancier worksheets and they may be safe in their use of AI, but their learning will remain unchanged.

In other words, if GenAI has any chance of providing great, innovative learning experiences for students, schools’ priorities must evolve.

As Thomas Arnett has pointed out, however, changing priorities is a monumental challenge. We see this sometimes in the world of accreditation, when schools treat accreditation as a hoop to jump through or a marketing project rather than a mechanism to evolve their priorities.

At Middle States, we worried that attitudes of compliance or credentialism would result in schools “integrating” AI into what they are already doing.

That’s why RAIL is not just an accreditation-style endorsement, but also an implementation framework. It relies on the wisdom of Stewart Brand’s “pace layering” model, which reflects how complex, adaptive systems change (or resist change) over time.

If we apply the pace layering model to a K-12 school or district, we might imagine the following:

  • Practices: The day-to-day behaviors that bring to life a school’s programs. 
  • Programs: The courses, clubs, and sports that activate a school’s curriculum. 
  • Infrastructure: The foundational systems on which programs and practices are built. 
  • Governance: The rules for decision-making for all stakeholders. 
  • Culture & Identity: The source code for a school, where the deepest narratives live. And narratives drive priorities. Changes here are glacial because they depend upon shared understanding and commitment from diverse stakeholders.

In line with Clayton Christensen’s Business Model Theory, we designed RAIL so that schools address AI at every layer. This means that they will have to shift resources, processes, and most of all priorities. To assist further, we are drawing on best practices from change management—for example, by requiring executive sponsorship, a change strategy, and an internal and external marketing plan. (You can see a fuller description of the accreditation model, including required evidence, at msaevolutionlab.com/rail.)

Learning evolution requires accreditation model evolution 

Accreditation is a “school improvement” model, which is to say that it is designed for incremental change. But GenAI (which appears to be accelerating at a rate that outpaces even Moore’s Law) is spurring rapid changes for which there are no research-based best practices. 

As Kevin Kelly said in The Inevitable, “We are morphing so fast that our ability to invent new things outpaces the rate we can civilize them.”

There is too much at stake for accreditors to do nothing. That’s why we created RAIL as a nimble and adaptive implementation framework that acts as an endorsement rather than an accreditation. This avoids conflating best practices with promising practices. 

It also means we can engage with schools through faster feedback loops—the RAIL endorsement lasts for two years (vs. an accreditation’s 5-10 years, depending on the agency). And even in those two years, schools know to expect periodic “software updates” to RAIL, just as a smartphone pushes updates to its operating system.

As accreditors, our license to exist depends on our ability to inspire wise change in schools. Because most of us at Middle States are former school leaders, we have learned our lessons about implementing change the hard way. So we know that a modernized approach to accreditation—one that accounts for those hard lessons and that reflects Clayton Christensen’s insights about what it takes to innovate—can help us meet this moment.

The great Harvard biologist E.O. Wilson once said that “The real problem of humanity is we have Paleolithic emotions, medieval institutions, and god-like technologies.” Accreditation may not change humanity’s emotional hard wiring or put the technology genie back in the bottle, but we can transform accreditation from a medieval institution into a nimble one worthy of an age of abundant AI.

Our schools are counting on it.

Author

  • Christian Talbot