AI is on Campus, Promising Access at a Cost

Laurie Johnson / Jan 13, 2025

Elise Racine / Better Images of AI / Morning View / CC-BY 4.0

Laurie Johnson is Director of the Campus Honors Program and Professor of German at the University of Illinois at Urbana-Champaign, and is a Public Voices Fellow through the OpEd Project.

The recent release of the report from the bipartisan House Task Force on Artificial Intelligence points to a surge in national interest in generative artificial intelligence, from the federal policy-making level to individual laptops; the report's stated intention is to inform “the American people on the advantages, complexities, and risks of artificial intelligence.”

At the large state university where I am on faculty, discussions about the impacts of generative artificial intelligence on higher education have moved quickly from concerns about how likely students are to use AI to cheat to AI’s potential for providing efficient, enhanced access to knowledge. As we navigate this rapidly changing landscape inside and outside of the classroom, we must address the costs of the broadened access to information that AI platforms offer. While faculty and administrators at my institution are committed to exploring the knowledge and opportunities AI can help provide, there is cause for serious concern, and not merely about whether students will use ChatGPT and other platforms to cheat.

“Access” is a huge topic in higher education right now, especially after the Supreme Court’s June 2023 decision to end affirmative action. The fear that access to education and all the opportunities it offers will be denied unjustly to large segments of the population has manifested in a number of studies demonstrating the value of access and suggestions for creative ways to support underrepresented and underresourced students. The capability of AI to quickly transform complex topics into more readily digestible content is seductive, as that capability seems to promise more “access” to knowledge—and therefore may masquerade as a solution to much bigger problems of social inequity.

To be sure, platforms and programs like ChatGPT and Google Translate can provide knowledge access to students who, say, live in rural areas and/or attend under-resourced schools. For example, a student I know attended a high school that did not offer German, so he used Duolingo (a program that relies on AI) to learn enough to credibly apply to the university with an indicated interest in continuing with the language. He has taken advantage of the opportunities we offer to become a successful student of German literature, culture, and language. AI was a component of a set of experiences the student put together to increase his access to opportunity. In another example, high school teachers now can make productive use of AI to create updated learning materials and exercises—which, in a typical seven- or eight-hour school day filled with lessons on a variety of levels, can free the teachers to focus more on their own pedagogical development, on learning more about their fields, or on their students’ wellbeing.

The promise of expanded access to knowledge and more efficient use of time is what drew me to begin experimenting with a few AI platforms. I recently uploaded an essay I wrote on German philosophy into the Google AI tool NotebookLM. In under four minutes, NotebookLM converted my twenty-two-page article into a freewheeling eight-minute conversation between two very human-sounding AI voices. The conversation, intended to be in the style of a podcast, opened with the male-sounding AI voice asking: “Hey, are you ready to get into some deep stuff?”

My initial scoff at the characterization of my work as “deep stuff” turned to surprise as the podcast conversation proceeded to make my exploration of the political aspects of transcendental philosophy quite easy to understand—arguably more accessible. However, the podcast also relied on pat, standard language patterns, the result of a process in which text is broken into language tokens and processed by models trained on huge datasets, which in turn generate an essentially frictionless set of statements. My colleague Leif Weatherby has called this “‘language as service,’ packaged and prepared…channeled into its flattest possible version so as to be useful to those who mainly use language as liability control.” Students who use NotebookLM to, say, turn their neural engineering lectures into podcasts might have the feeling they are getting increased access to knowledge, while instead, they are getting a smooth, palatable set of expressions that rely far too much on trite analogies and supposed even-handedness.

AI can plumb the Internet’s collective resources and deliver excerpts of those resources to anyone, anywhere, who has the tools. But it is clear from the papers students ask ChatGPT to create for them that the excerpts and simplifications AI chooses are not only often overused and clichéd—AI tends to reproduce the phrases that are already the most repeated—but also often simply incorrect: for example, ChatGPT and other platforms get the ambiguous endings of many novels wrong.

And the access, efficiency, and augmentation AI offers come at a literal cost. You can try NotebookLM and other AI platforms for free, but if you want to do more, you must purchase a subscription. Questions about intellectual property currently sit in a murky zone: when a professor at the University of California, Los Angeles recently used the AI platform Kudu to create a textbook and activities for a medieval literature course, the ensuing chatter was partly about whether the professor was permitting AI to do too much thinking, but also about the fact that Kudu (which was founded by another faculty member at UCLA) now has a license to distribute that course material at will. The professor, however, notes that the Kudu-created textbook is cheaper than traditional texts and that students will be able to access it long after the class is over.

AI is used on campus and off, every day. The challenge is to continue to engage our human intelligence to evaluate where the machines help us, and where they fail. “Access” can mean providing more knowledge to students with fewer resources. But access is not inherently a virtue. It is important that we not become unthinking consumers complicit in lowering the expectation that we think critically and carefully, on- and offline.
