
Enrollment algorithms are contributing to the crises of higher education

Alex Engler / Sep 14, 2021

As colleges look for any advantage in the fierce competition to enroll students, hundreds of institutions have turned to algorithms to increase enrollment, especially by using scholarships to convince prospective students to attend. In a series of interviews, representatives from three different algorithmic enrollment vendors independently noted that their goal was to “build a better mousetrap.”

It makes sense that so many colleges, over 75%, use data analytics for enrollment—it’s where the money is. Beyond financial stability, enrollment analytics are necessary for institutional planning, such as preparing sufficient student housing and ensuring course availability.

Yet, as I learned more about how these algorithms are used to award financial aid in higher education, I became quite concerned. The webinars, documents, and academic studies I saw all suggest the same deeply troubling problem: these algorithms intentionally reduce scholarships, and in doing so, may contribute to the crises of student loan debt, college dropout, and racial inequities. If it is not already clear, when industry representatives talk about using enrollment algorithms to build a better mousetrap, the students are the mice.

My review of the vendors, such as EAB, Ruffalo Noel Levitz, Rapid Insight, Capture Higher Ed, and Othot, identified at least 700 higher education institutions that procure algorithms to allocate scholarships that entice enrollment. While universities have long used scholarships to persuade students, these algorithms are more effective than their manual predecessors. A recent paper from researchers at the University of Washington shows how implementing this algorithmic process at a large unnamed public university (I’ll let you figure out this riddle) improved out-of-state applicant yield by 23.3%. A simulation study from Southern Illinois University Carbondale (SIUC) found similar results, and vendor claims echo these statistics. One case study from Othot claims that its analytics enrolled 173 additional freshmen at the New Jersey Institute of Technology (NJIT) without a corresponding rise in scholarships.

Since this market is largely driven by procurement, the perspective of the vendors is especially important. Unfortunately, their clear focus is on raising net tuition, even at the explicit cost of student support. As at NJIT, another case study, this one from EAB, takes credit for a 33% increase in net tuition alongside a six percentage point decrease in scholarship rates at Aster University. A vendor webinar referred to this as the “right amount of financial aid to lure in that fish.” While these vendors are reflecting the priorities of colleges, the routine references to students as game to be trapped are still, to say the least, discouraging.

The academic research bears out the same conclusions. In both studies mentioned above, the algorithmic strategy lowered per-student scholarship disbursements. When implemented at the mystery university, the algorithms even led to a new lower scholarship tier (4-8% of tuition) that had previously not existed. The study at SIUC suggested it would have the same effect, but a leadership change prevented the algorithm’s deployment.

It isn’t surprising that these algorithms are effective in generating tuition. The machine learning process is built for prediction, so it excels at estimating how scholarship offers affect the likelihood to enroll, at least across many students. Additionally, optimization algorithms can easily evaluate thousands of scholarship disbursement strategies, finding the one that will lead to the highest net tuition. Combined, they offer a powerful informational advantage to colleges, strengthening their position in a financial negotiation with students. As more colleges adopt these tools, they may be furthering a pre-existing decline in the percent of financial need met by colleges.
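To make those mechanics concrete, here is a minimal sketch of the predict-then-optimize pattern in Python. Everything in it is invented for illustration: the logistic enrollment model, its coefficients, and the candidate aid levels are my assumptions, not any vendor’s actual methodology.

```python
# Illustrative sketch only: a toy version of the predict-then-optimize
# pattern described above, not any vendor's actual system. All names,
# coefficients, and data here are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
TUITION = 30_000                           # hypothetical sticker price
AID_LEVELS = np.arange(0, 20_001, 2_000)   # candidate scholarship offers

def enroll_probability(aid, affinity):
    """Toy logistic model: enrollment likelihood rises with aid.
    In practice this would be a model fit on historical admit data."""
    return 1 / (1 + np.exp(-(affinity + aid / 10_000)))

# Simulate 1,000 admitted students with varying baseline affinity.
affinity = rng.normal(-1.0, 0.8, size=1_000)

# For each student, evaluate every aid level and pick the one that
# maximizes *expected net tuition*, i.e. P(enroll) * (tuition - aid).
best_aid = np.empty_like(affinity)
for i, a in enumerate(affinity):
    expected_net = enroll_probability(AID_LEVELS, a) * (TUITION - AID_LEVELS)
    best_aid[i] = AID_LEVELS[np.argmax(expected_net)]

print(f"mean optimal offer: ${best_aid.mean():,.0f}")
print(f"expected yield: {enroll_probability(best_aid, affinity).mean():.1%}")
```

Even this toy version lands on an interior optimum: offering nothing loses too many students, while offering the maximum gives away tuition unnecessarily. That trade-off, tuned per student, is exactly the informational advantage described above.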

This leads directly to a disquieting second problem. The funding a student receives doesn’t only affect their likelihood to enroll; it also affects their likelihood to graduate. One study found that a thousand dollars in aid increases graduation odds by around one percent, and the effect is higher for need-based aid. More dauntingly, an additional $1,000 in unsubsidized loans reduced the likelihood to graduate by over 5% for low-income students, which, the study notes, is “the largest negative factor for all aid estimates.”

Only 62% of full-time college students graduate within six years. College dropouts with debt are, on average, $14,000 in the hole, and just under half of them are able to make payments on it. Considering the high percentage of college students who drop out with debt they are unable to pay, it is a matter of national concern that hundreds of higher education institutions may be optimizing scholarships to entice students to attend, rather than to succeed or graduate.

Like other AI systems, enrollment algorithms are also susceptible to discriminatory outcomes. Notably, the combination of predictive and optimization algorithms makes the danger of discrimination more subtle and complex. In this case, bias would come in the form of differences in the marginal effect of aid on different groups’ likelihood to enroll. So, you would expect biased results against Black applicants if providing more aid to Black applicants had a smaller effect on their likelihood to enroll than providing more aid to white applicants. This might happen if, perversely, Black applicants needed more aid than white students to meaningfully raise their likelihood to enroll.
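To see the mechanism, consider a hedged sketch in which an enrollment model has been fit separately for two applicant groups. The coefficients below are hypothetical, chosen only to show how the arithmetic plays out.

```python
# Hedged sketch of the bias mechanism described above: if a fitted model
# implies that an extra dollar of aid moves one group's enrollment
# probability less than another's, an optimizer will steer aid away
# from that group. The logistic coefficients below are invented.
import numpy as np

def p_enroll(aid, intercept, aid_coef):
    return 1 / (1 + np.exp(-(intercept + aid_coef * aid / 1_000)))

# Hypothetical fitted coefficients (intercept, aid sensitivity) per group.
groups = {"group_A": (-1.0, 0.25), "group_B": (-1.0, 0.10)}

aid = 5_000
for name, (b0, b1) in groups.items():
    # Marginal effect of +$1,000 in aid at the current offer level.
    marginal = p_enroll(aid + 1_000, b0, b1) - p_enroll(aid, b0, b1)
    print(f"{name}: change in P(enroll) per extra $1,000 = {marginal:.3f}")
```

An optimizer maximizing yield per scholarship dollar would shift aid toward group_A, whose enrollment is cheaper to move, even if group_B needs the aid more to afford attendance. No group label has to appear in the objective for this to happen, which is why a bias audit must compare these marginal effects directly.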

This is especially concerning in light of the existing racial disparities in higher education, even at public institutions. The Hechinger Report recently identified fifteen public flagship universities that underserve their state’s Black population by at least ten percentage points. Black and Hispanic students lag behind white students in graduation rates too—by over 20 percentage points and 10 percentage points respectively. Colleges should be especially careful not to implement algorithmic systems that may perpetuate these inequities, making sure to run bias audits and avoid an over-reliance on GPA and SAT scores.

These are not the only potential issues. In a new Brookings Institution report, I discuss these and others, including how basing expectations of future students on historical data might undermine diversity of thought and life experience in higher education.

Because of these challenges, colleges should implement careful processes around the use of enrollment algorithms. For one, colleges should never use these algorithms in either the admissions process or in awarding need-based aid—these determinations should only be made based on an applicant’s merit and financial circumstances, respectively. Since algorithms are incapable of holistically evaluating a college applicant’s candidacy, any review and scoring of application quality should be exclusively done by humans.

Rather than only looking at enrollment, colleges should also predict how scholarship funding will affect students’ ability to thrive and graduate. The vendors have the capacity to identify many students who aren’t receiving enough aid to succeed—one senior executive told me they often know when “that kid shouldn’t come.” Colleges should therefore provide the necessary data to vendors, insist on analyses of student success (which some vendors offer), and be prepared to make tough decisions about which students they can sufficiently support.
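As a rough illustration of what that reorientation could look like, the sketch below swaps the objective from expected net tuition to the joint probability that a student both enrolls and graduates. The models and parameters are assumed for demonstration and are not drawn from any vendor’s product.

```python
# A minimal sketch, assuming both an enrollment model and a completion
# model are available, of reorienting the objective from expected net
# tuition toward expected *graduates*. All models and numbers invented.
import numpy as np

AID_LEVELS = np.arange(0, 20_001, 2_000)

def p_enroll(aid, affinity):
    return 1 / (1 + np.exp(-(affinity + aid / 10_000)))

def p_graduate(aid, need):
    # Toy assumption: completion odds fall as unmet financial need grows.
    unmet = np.maximum(need - aid, 0)
    return 1 / (1 + np.exp(-(1.0 - unmet / 8_000)))

def best_offer(affinity, need):
    """Pick the aid level maximizing P(enroll AND graduate)."""
    score = p_enroll(AID_LEVELS, affinity) * p_graduate(AID_LEVELS, need)
    return AID_LEVELS[np.argmax(score)]

# A high-need applicant draws a larger optimal offer under this
# objective than under a pure net-tuition objective.
print(best_offer(affinity=-0.5, need=15_000))
```

The design choice is the point: once graduation enters the objective, the same optimization machinery that currently minimizes scholarships starts directing aid toward the students whose success depends on it.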

Policymakers have a role here, too. State lawmakers can press public universities to be more transparent in their use of enrollment algorithms, and even consider commissioning independent assessments of their use and impact. The U.S. Department of Education could issue best practices and technical guidelines on the equitable use of these tools.

These algorithms do not drive the institutional pursuit of tuition. Colleges are in an ever more competitive market for enrollment, and fighting for tuition has long been the norm in higher education. Still, the impact of these algorithms is significant. Domestic institutions of higher education are adopting algorithmic price discrimination based on willingness to pay, a practice that fostered outrage when it was used for staplers, much less the nation’s primary engine of economic mobility. The underlying assumption of these algorithms, that it is reasonable and moral to determine the cost of higher learning by willingness to pay, is wrong, and we should be more hesitant to encode it into our educational system.

Authors

Alex Engler
Alex C. Engler is a Fellow at the Brookings Institution and an Associate Fellow at the Center for European Policy Studies, where he examines the societal and policy implications of artificial intelligence. Previously faculty at the University of Chicago, Engler now teaches AI policy at Georgetown University.
