In a recent New York Times piece in which he vented about the lobbyists that hold government accountability at bay on behalf of the nation’s colleges and universities, Kevin Carey took note of a specific provision in a proposed new version of the Higher Education Act.
The provision “would require accreditors to establish minimum benchmarks for student success in graduating and getting jobs, though it does not specify what they should be,” Carey wrote.
Carey chronicled how the trade group of regional accreditors “weighed in, complaining that the benchmarks would have to be ‘numeric.’ If one of the group’s members decided that, say, 10% was an adequate graduation rate, it said, the education secretary could reject the benchmark as ‘too low.’ ‘This is a responsibility that should not be in the purview of the federal government,’ the group declared.” The letter Carey cited went on to decry the nature of one-size-fits-all metrics and the like—which I’m sympathetic to, but, as Carey noted, the metric even allows for different standards for different types of degrees.
By not specifying a minimum benchmark and by allowing for variation, this contested provision is actually an opportunity that higher education ought to embrace rather than fight. It could create a standard far more reasonable than an all-or-nothing bar, one that reflects institutions’ different missions.
As I noted in a white paper titled “Disrupting College” for the Center for American Progress and in an op-ed in The Washington Times almost a decade ago, policymakers should change access to federal funding from today’s all-or-nothing scheme to a sliding scale based on how an institution performs relative to its peers on these dimensions.
An all-or-nothing scheme is problematic for two reasons. First, it exerts less demand-side pressure on institutions that just clear the bar, because it is no easier to receive financing for schools that offer higher value than for those that offer lower value. Second, in the current system, which is addicted to, and dependent on, federal financial aid, that bar can never be set very high, or else it eliminates access for many students. Preserving access has been a key goal, for better or worse, of the federal government for decades.
A better way forward would be to establish different quality-value indices for different institutions. Each index would be made up of a set of standard measurements, akin to those we’ve developed at the Education Quality Outcomes Standards Board, and could range from completion measures to value-added earnings and return on total investment metrics (by the student, guardian, and taxpayers) and from learning outcomes to retrospective student satisfaction.
From there, the better a school performed on these measures compared to its peers, the higher percentage of its educational operation it could finance with federal aid — thereby eliminating the all-or-nothing access to federal dollars and encouraging students to make decisions based on quality and cost, which would drive institutions to innovate. It would look something like this:
Hypothetical percentage of revenue that can be drawn from federal Title IV funds:

| Program percentile (vs. peers) | Title IV cap |
| ------------------------------ | ------------ |
| 75th–100th (top 25 percent)    | 100 percent  |
| 50th–75th                      | 90 percent   |
| 25th–50th                      | 75 percent   |
| 10th–25th                      | 50 percent   |
| 5th–10th                       | 10 percent   |
| 0–5th                          | 0 percent    |
To be clear, because the minimum benchmark would be expressed as a percentile, it would be based on the performance of the total set of comparable programs. Programs in the bottom 5 percent, for example, wouldn’t be able to access federal financial aid at all, and that is where accreditors would set the bar.
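To make the mechanics concrete, here is a minimal sketch of how such a sliding scale might be computed. The tier boundaries come from the table above; the composite quality-value index, the function names, and any scores are hypothetical illustrations, not part of any actual proposal.

```python
from bisect import bisect_right

# Tier schedule from the table above: (lower percentile bound, Title IV cap in percent).
TIERS = [(0, 0), (5, 10), (10, 50), (25, 75), (50, 90), (75, 100)]


def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percent of peer programs whose quality-value index falls strictly below this program's."""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)


def title_iv_cap(percentile: float) -> int:
    """Map a program's percentile rank among comparable programs to its federal funding cap."""
    bounds = [lo for lo, _ in TIERS]
    return TIERS[bisect_right(bounds, percentile) - 1][1]
```

Because the cap is a function of rank rather than a fixed cutoff, the bar moves automatically with the performance of the whole peer group: a program in the 80th percentile could finance 100 percent of its operation with Title IV funds, while one in the bottom 5 percent could finance none.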
This would do three things. First, it would cause programs to compete with each other to improve and jump into, or remain in, higher tiers, which would likely lift the mix of programs over time. Second, because the benchmark is relative, it would constantly recalibrate to account for macroeconomic conditions, for example, a recession that causes a given year’s cohort of students to struggle to find jobs. Third, students would feel pressure to make smarter investment decisions about their education, based on the historical value of that investment, because it would be easier to get financing for schools that offer better value.
This would leverage the power of the market such that perhaps it could be politically palatable to those in the Trump administration that Carey fears are allies of the college lobby.
One last point—Carey is right that higher education institutions reflexively fight any accountability measures, just as in any industry. And the not-for-profit and public institutions have been particularly successful at winning these fights (the for-profit universities, incidentally, have struggled). Because of this, we can take a page from the Disruptive Innovation playbook to suggest another way to implement a provision like this.
Instead of trying to fight the established order, Congress could create a limited pool of government funds that bypasses the accreditation process, through which experimental, innovative programs willing to hold themselves accountable could try out these new mechanisms over a 5-to-8-year window. In essence, rather than managing the outcomes that we do not want to see, policymakers would be seeking to unleash innovation by setting the conditions for good actors that improve access and value, be they for-profit, nonprofit, or public.
If the experiment works with minimal unintended consequences and the programs deliver real value, then policymakers could allow them to gain share and work with traditional institutions and accreditors to extend the new framework. So long as we remain stuck in our current college federal financing scheme, it’s an experiment worth trying.