
Will AI Degrade Online Communities?

Kalie Mayberry / Sep 7, 2023

Kalie M. Mayberry is a social impact researcher and educator, exploring activism practices and community governance models at the Berkman Klein Center for Internet and Society at Harvard University.

From generating novel recipes to drafting emails to coding, people have found various ways to use generative AI chatbots. Google Labs even helped me put this piece together, providing a short outline that I then began to edit and build out with specifics. But relying on generative AI carries risks beyond the many problems inherent in the technology – including bias, hallucinations, the potential for spreading disinformation, and a lack of transparency about how the models are trained and how they work. It may also affect online collaboration, shrinking and even hollowing out some of the robust communities that have been built online over the past thirty years.

Like many people, I have always used the internet to find answers to questions and solutions to problems – from reminding myself how many eggs to put into pancake mix to wondering if anyone else is experiencing an Instagram outage. Odds are I can find someone else out there with the same question, usually because that person has started a thread in an online community forum to find the answer. The main problem I typically face is actually finding that thread. Sometimes remembering three words from a song whose name I can’t recall is enough to find exactly what I am looking for, but there are always times when I haven’t provided the right prompt, leaving me spinning my mental wheels trying to remember on my own.

What large language model chatbots such as ChatGPT and Bard now purport to do is solve my problem of finding by predicting what it is I hoped to find in the first place. Where I might have turned to an online cooking community to ask about the number of eggs for pancakes, now I can quickly log in to ChatGPT and ask the same question – and almost always receive a near-instantaneous, workable answer. Where I once had to rely on finding information posted to the internet by other humans, now the chatbot is both the intermediary and the endpoint. If you had access to an answer you knew would arrive lightning fast and would likely be correct, would you prefer it over searching for the right answer posted by a human?

This example of preferring a machine’s output to a human’s might be sharpest in the coding community. Previously, if I was working alone in any coding language (primarily Python and R, in my case) and I ran into an error message or received the wrong output, the best resources at my disposal were community forums such as Stack Overflow or GitHub. These sites have served as crowdsourcing platforms for people with questions about code for over 15 years, with tips and solutions sourced from all over the world across dozens of programming languages. As in the pancake example, it is statistically highly likely that someone, somewhere in the world, has faced an issue similar to mine at some point and has created a post within these forums with an attempt at an answer.
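To make this concrete, here is a small, hypothetical example of my own (not drawn from any particular forum thread) of the kind of Python error that sends people to Stack Overflow – the exact text of the error message is usually the search query:

```python
# A common beginner mistake: concatenating a string and an integer.
count = 3
try:
    message = "You have " + count + " new replies"
except TypeError as err:
    # This is the text one would paste into a forum search:
    # can only concatenate str (not "int") to str
    print(f"Error to search for: {err}")

# The idiomatic fix a typical accepted answer would suggest:
message = f"You have {count} new replies"
print(message)
```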

But there is always the possibility my question hasn’t been answered yet. In that case, what options am I left with? Trial and error, over and over, hoping to solve it on my own. Becoming increasingly frustrated, while at the same time perhaps creating my own post to solicit advice, hoping someone answers in the nick of time. While this entire process is time consuming and often comes with its share of frustration, it’s also an important learning exercise. How could I have written my question differently to find my result? How does this exercise in defining the problem expand my knowledge of it – and of the answer – in a way I can use next time? As the saying often attributed to Ralph Waldo Emerson goes, perhaps “it’s not the destination, it’s the journey.”

But often in these situations, we’re not really hoping to learn a lesson: we’re interested in finding a viable solution as quickly as possible so we can move on to what we’re actually trying to do with the code output. So imagine yourself hitting this issue and coming across a suggestion to try ChatGPT. What’s the harm in trying it out? You take the error message your code produced and throw it into the chatbot, along with a prompt: “help me resolve this error in my code.” Press enter, and a few seconds later, the AI provides you with a solution. When it works – and it often does – the chatbot has provided you with what appears to be a faster, easier, and distinctly specialized result for your exact issue. Why would you ever go back to the crowdsourcing forums after seeing this result? The chatbot is a personalized tutor available at a moment’s notice, with arguably better performance than the humans in your online forums.
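In code, that workflow is only a few lines. The sketch below uses OpenAI’s Python library as it existed when this piece was written (the pre-1.0 ChatCompletion interface); the model choice and prompt wording are illustrative assumptions, not a recommendation:

```python
# A minimal sketch of the "paste the error into a chatbot" workflow.
# Assumes the openai package (pre-1.0 API) and an OPENAI_API_KEY
# environment variable; the model name is an illustrative choice.
import openai

error_message = 'TypeError: can only concatenate str (not "int") to str'

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": f"Help me resolve this error in my code:\n{error_message}",
        }
    ],
)

# A few seconds later, a personalized answer arrives: no forum
# thread to hunt down, no waiting for another human to reply.
print(response.choices[0].message.content)
```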

Where does this leave Stack Overflow and GitHub, among other crowdsourcing sites? What might this phenomenon mean for Reddit (which is already facing challenges) or Facebook Groups? Indeed, these crowdsourcing spaces might become much more desolate, suffering from declining volumes of new entries and users. Turning away from such forums in favor of generative AI chatbots may mean we engage increasingly only with ourselves, interacting primarily with a tool to which we may start to ascribe human attributes, considering it a partner in our work. While the benefits are easy to understand, how will this tool also hold us back socially, and even developmentally, if we’re less motivated to find things out by engaging with others?

While our online communities may not be in immediate danger, they may begin to look different the more we continue to rely on generative AI tools. Fundamentally, I see two major shifts that may occur, although perhaps not all at once.

First, our online communities may shift from a network of humans to pure generated media – where everything new posted online is created by a machine or by someone looking to turn a profit, with no social purpose beyond engagement metrics tied to financial incentives. When everything becomes generative media, content is no longer used primarily to communicate or to build community relationships; the goal is engagement. And if everything posted to online communities is created for this purpose, there will likely come a point where less of the material is helpful: the solution forums typically scraped to train AI models would themselves fill with AI-generated material, creating the circular problem of training a machine on its own output.
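That circular problem has a simple statistical analogue, which researchers have begun calling model collapse. The toy sketch below (my own illustration, not drawn from any specific study) treats “training” as resampling a corpus from itself: rare items drop out each generation and can never return, so diversity only shrinks.

```python
# Toy illustration of training a model on its own output.
# Each generation's "training corpus" is resampled, with
# replacement, from the previous generation's. Items that miss
# a draw vanish forever, so the count of distinct documents can
# only fall; a simplified picture of model collapse.
import numpy as np

rng = np.random.default_rng(42)
corpus = np.arange(100)  # 100 distinct "human-written" documents

for generation in range(1, 201):
    corpus = rng.choice(corpus, size=corpus.size, replace=True)
    if generation % 40 == 0:
        print(f"gen {generation:3d}: "
              f"{np.unique(corpus).size} distinct documents remain")
```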

Second, our online communities may become spaces where only the most technically advanced individuals circulate, looking for answers to questions the AI hasn’t caught up with but that are far beyond the reach of the average user. These communities may narrow into spaces only for “learned” elites, the most knowledgeable individuals who flock to these corners to find their kind amid the overwhelming number of bots proliferating and scraping every corner of the internet. These communities may then further insulate themselves, developing secret spaces away from the eyes of internet crawlers, where the “real information” lies – potentially deepening inequality and reinforcing social hierarchies.

From these two potential shifts, we can begin to see how a sense of online community might erode. What is lost when we move away from building communities and rely instead on a single source? The impacts may be political, psychological, and even spiritual. Will we lose our sense of belonging, and the skills required to cultivate meaningful online relationships?

Perhaps a solution lies within understanding how forums can benefit from integrating chatbots, if we provide effective guardrails to limit their engagement. These bots could help search for information in a pinch, without becoming part of the dialogue. We will need to ensure we can differentiate between human and bot, using the bot as a resource and nothing further.
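What could such guardrails look like in practice? The sketch below is one hypothetical shape (the bot name, invocation rule, and labeling convention are all my own assumptions): the bot answers only when a human explicitly summons it, posts at most once per thread, and always labels its output so readers can tell human from machine.

```python
# Hypothetical guardrails for a forum-integrated chatbot: respond
# only when explicitly invoked, label all output as bot-generated,
# and never join the human-to-human dialogue. Requires Python 3.10+.
from dataclasses import dataclass

BOT_LABEL = "[AI-generated answer; not a human participant]"

@dataclass
class Post:
    author: str
    body: str
    is_bot: bool = False

def bot_reply(thread: list[Post], generate) -> Post | None:
    """Return a labeled bot post, or None if the guardrails say no."""
    last = thread[-1]
    # Guardrail 1: act only when a human explicitly asks the bot.
    if last.is_bot or "@helpbot" not in last.body:
        return None
    # Guardrail 2: one bot answer per thread; no back-and-forth.
    if any(p.is_bot for p in thread):
        return None
    # Guardrail 3: always label output so readers can differentiate
    # between human and bot.
    answer = generate(last.body)  # plug in any LLM call here
    return Post(author="helpbot", body=f"{BOT_LABEL}\n{answer}", is_bot=True)

# Example usage with a stubbed generator standing in for a real model:
thread = [Post("alice", "How many eggs go in pancake mix? @helpbot")]
reply = bot_reply(thread, generate=lambda q: "Typically two eggs per cup of mix.")
print(reply.body if reply else "Bot stayed silent.")
```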

Ultimately, generative AI is a helpful tool in some contexts – but nothing more than that. It’s not a collaborator or a partner. It’s not a person with thoughts, and it is certainly not sentient. But in the wake of AI expansion, we need to reflect on how it might affect the online communities we’ve come to rely on, and how it may in turn change the ways in which we interact with one another.
