Can Social Media Bring Us Together? Experts Say Yes

Prithvi Iyer / Apr 10, 2025

Jamillah Knowles & Reset.Tech Australia / Better Images of AI / Social media content / CC-BY 4.0

Social media platforms have redefined communication by shifting the role of content curation from traditional editorial gatekeepers to largely algorithmic systems that reward content based on engagement metrics. Scholars have argued that algorithms optimized for engagement inadvertently reward content that elicits strong emotions, exacerbating polarization and, in some instances, amplifying disinformation. Platforms have resorted to top-down content moderation and fact-checking initiatives to deal with the issue. While these efforts have had some positive results, they have also been criticized for impinging on free speech and alienating Global Majority countries where content moderation infrastructure is grossly insufficient.

This raises the question: can social media platforms be designed in ways that incentivize social cohesion? A new research paper from E. Glen Weyl, Luke Thorburn, Emillie de Keulenaar, Jacob Mchangama, Divya Siddharth, and Audrey Tang addresses this question by proposing design features based on plurality and cohesion rather than engagement. Through this research, the authors engage with a growing community of technologists and researchers interested in “translating social cohesion into platform design.” Below, I distill their findings, describe the alternative social media model presented in the paper, and explain how it could work in practice.

Bridging and balancing

The authors refer to online users in their model as “citizens” who belong to multiple communities, “defined by explicit affiliation or shared attitudes.” To promote common understanding and social cohesion, the authors propose adding social context by “annotating posts with the communities among which they are widely accepted or divisive.” To address online polarization, they propose replacing recommendation algorithms optimized for engagement with one that ranks content based on what “relevant communities have in common,” while ensuring that “all relevant communities receive a fair share of attention.”

The authors refer to these goals as “bridging” and “balancing.” Unlike traditional social media platforms focused on a single user who sees content based on “social communities they are inferred to belong to,” this system ensures users are aware of the “community of others that are seeing and assenting to that content.” This proposed design change prevents what is called the “false consensus effect,” wherein what users perceive as viral content merely reflects the views of a narrow subset of users. But what if misinformation serves a bridging function?

In written remarks to Tech Policy Press, Luke Thorburn, a researcher and co-author of the paper, said:

Such content might be bridging for some communities to which users belong, but we all are members of many different communities, which represent many different value sets and interests. So in our proposal, such content would be represented but would also need to 'compete' with content that bridges other communities the user belongs [to], and the UI would make it clear for which communities a given item of content is bridging or contested.
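To make the bridging-and-balancing mechanics above concrete, here is a minimal sketch of how such a ranker could work. This is my own illustration, not the paper's specification: the per-community approval rates, the use of the minimum approval as a bridging score, and the round-robin balancing rule are all assumptions made for clarity.

```python
# Minimal, illustrative sketch (not the paper's algorithm): rank items by how
# widely they are accepted across a user's communities ("bridging"), then
# interleave results so each community gets a fair share of feed slots
# ("balancing"). Approval rates are hypothetical estimates in [0, 1].

from collections import defaultdict

def bridging_score(approval, communities):
    """Approximate bridging as the lowest approval among the relevant
    communities, so content must be accepted by all of them to rank highly."""
    return min(approval.get(c, 0.0) for c in communities)

def rank_feed(items, user_communities, slots=10):
    """items: list of (item_id, source_community, approval_by_community)."""
    by_community = defaultdict(list)
    for item_id, source, approval in items:
        score = bridging_score(approval, user_communities)
        by_community[source].append((score, item_id))
    for pool in by_community.values():
        pool.sort(reverse=True)  # best bridging items first within each community

    # Balancing: round-robin across the user's communities, taking each
    # community's best remaining bridging item in turn.
    feed = []
    while len(feed) < slots:
        progressed = False
        for community in user_communities:
            pool = by_community.get(community)
            if pool:
                feed.append(pool.pop(0)[1])
                progressed = True
                if len(feed) == slots:
                    break
        if not progressed:
            break
    return feed

# Example: a user who belongs to both "democrats" and "us_general".
items = [
    ("post_a", "democrats", {"democrats": 0.90, "us_general": 0.80}),
    ("post_b", "democrats", {"democrats": 0.95, "us_general": 0.30}),
    ("post_c", "us_general", {"democrats": 0.70, "us_general": 0.75}),
]
print(rank_feed(items, ["democrats", "us_general"], slots=3))
# -> ['post_a', 'post_c', 'post_b']
```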

Model specification and practical considerations

A central component of social media is how online users interact with content. In this proposal, the authors make the case for “displaying social provenance and attending to critical responses.” For bridging content, a note on the post would indicate that the content is widely accepted in the user's community. For balancing content, the user would be notified that the discourse is divisive and could click through to a new window showing both sides of the debate. The authors propose using an LLM to summarize the main arguments a community makes about a particular topic and how those arguments differ across communities.

In practice, content might serve different purposes based on a user’s community affiliations. For example, a user identifying as an American Democrat might see content that serves a bridging function within the Democratic community but is divisive among Americans overall. Content that bridges divides across all of the user’s communities would be labeled “green.” Where content is divisive within a community (i.e., viewed positively by some members and negatively by others), it would carry a “conflict indicator,” encouraging the user to explore the other side of the debate. These specifications serve as templates; how they would look in practice is left to the discretion of platforms.
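One way to picture this labeling rule is as a simple mapping from per-community approval estimates to labels. The sketch below is illustrative only; the thresholds and the approval measure are placeholders I have chosen, not values from the paper.

```python
# Illustrative sketch only: assign a per-community label to a post from a
# hypothetical approval estimate (share of members reacting positively).
# Thresholds are arbitrary placeholders, not values from the paper.

BRIDGING_MIN = 0.7                          # widely accepted -> "green" label
CONTESTED_LOW, CONTESTED_HIGH = 0.35, 0.65  # split opinion -> conflict indicator

def label_for_community(approval_rate):
    if approval_rate >= BRIDGING_MIN:
        return "green"               # bridging: widely accepted here
    if CONTESTED_LOW <= approval_rate <= CONTESTED_HIGH:
        return "conflict_indicator"  # divisive: prompt user to see both sides
    return "no_label"

def annotate(post_approvals):
    """post_approvals maps community -> estimated approval rate in [0, 1]."""
    return {c: label_for_community(r) for c, r in post_approvals.items()}

# A post that bridges Democrats but divides Americans overall:
print(annotate({"democrats": 0.88, "us_general": 0.50}))
# -> {'democrats': 'green', 'us_general': 'conflict_indicator'}
```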

The goal is to facilitate common understanding among social media users by explicitly labeling content to show where it originated and the extent to which people within and across communities believe it. In this system, if a user sees content labeled as bridging within a community, they can be confident that others in the community are seeing and assenting to that content; posts flagged as contentious are presented the same way to other group members, with an option to explore contrasting perspectives.

The authors recommend that platforms provide opt-in features through which users can connect with other group members. The “community” aspect is unique to this system, and defining a “community” is complex, as it involves subjective beliefs that are hard to measure directly. The authors propose approximating communities from various data points, such as social connections, explicit group affiliations, expressed attitudes, and behavioral patterns. Once a community is detected, the next step is to identify a representative, which can be done by linking communities to established organizations or identifiable leaders. Still, some online communities, such as protest movements, may initially lack formal representation. In such cases, platforms can register these communities until a leader steps forward. Governance challenges may arise, but the authors argue that allowing community members to vote for leadership changes could help maintain legitimacy.
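For a rough sense of how such community approximation might be automated, here is a sketch that combines several hypothetical per-user signals into a feature vector and clusters users with an off-the-shelf algorithm. The signal names and the choice of k-means are my assumptions; the paper does not prescribe a specific method.

```python
# Sketch of one way to approximate communities from several signals (social
# connections, stated affiliations, expressed attitudes, behavior). Feature
# names are hypothetical; a real platform would likely use richer graph-based
# community detection.

import numpy as np
from sklearn.cluster import KMeans

def community_features(user):
    """Concatenate per-user signals into one vector (all fields illustrative)."""
    return np.concatenate([
        user["follow_graph_embedding"],   # e.g., an embedding of the social graph
        user["declared_affiliations"],    # one-hot of explicit group memberships
        user["attitude_scores"],          # survey or inferred stance scores
        user["behavior_profile"],         # topic-level engagement rates
    ])

def approximate_communities(users, n_communities=8, seed=0):
    X = np.stack([community_features(u) for u in users])
    model = KMeans(n_clusters=n_communities, random_state=seed, n_init=10)
    return model.fit_predict(X)   # cluster id per user ~ candidate community
```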

Economic considerations

A practical solution to improving social media must also offer a viable business model that incentivizes platforms to implement these changes. To this end, the authors propose a system in which interactions between users and online content trigger micropayments based on a “pay per impression” model, with money flowing from sponsors (i.e., advertisers and formal communities) to creators. The assumption is that communities in this system would be willing to pay subscription fees to see content that brings members closer together. For advertisers, however, the traditional business model might not work.

As the authors note, “'pay-per-click' and 'pay-per-conversion' models will typically misfit an approach to advertising focused less on direct inducement of actions.” Instead, an advertising revenue model that quantifies joint impressions based on social relationships and creates consensus would be more viable. So why would advertisers find value in such a business model? “Many consumption decisions are collective in nature, so advertisers value being associated with shared experiences and cultural moments (think Super Bowl ads), and would be able to choose to advertise next to such content,” said Thorburn.
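As a back-of-the-envelope illustration of the “pay per impression” flow, the sketch below charges sponsors a small amount per impression and routes it to creators, with a higher rate for bridging content. The rates, the bridging multiplier, the sponsor names, and the settlement function are all invented for illustration; the paper does not specify them.

```python
# Toy sketch of the pay-per-impression flow described above: sponsors (formal
# communities or advertisers) are charged per impression, and the money is
# routed to the creator. Rates and the bridging bonus are made up.

BASE_RATE = 0.002          # dollars per impression (hypothetical)
BRIDGING_MULTIPLIER = 1.5  # sponsors pay more for content that bridges divides

def settle_impressions(impressions):
    """impressions: iterable of (sponsor, creator, count, is_bridging)."""
    charges, payouts = {}, {}
    for sponsor, creator, count, is_bridging in impressions:
        rate = BASE_RATE * (BRIDGING_MULTIPLIER if is_bridging else 1.0)
        amount = round(rate * count, 4)
        charges[sponsor] = charges.get(sponsor, 0.0) + amount
        payouts[creator] = payouts.get(creator, 0.0) + amount
    return charges, payouts

charges, payouts = settle_impressions([
    ("democrats_org", "creator_1", 10_000, True),
    ("sneaker_brand", "creator_2", 5_000, False),
])
print(charges)  # {'democrats_org': 30.0, 'sneaker_brand': 10.0}
print(payouts)  # {'creator_1': 30.0, 'creator_2': 10.0}
```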

Discussion and Limitations

The paper lays out the benefits of this system compared to traditional engagement-based ranking:

  • Depolarization: This system is geared toward providing “common knowledge as primary output,” which is shown to reduce toxic polarization. Moreover, by creating design pathways for users to find common ground with others and by allowing communities to pay directly for content that bridges divides, this system makes depolarization a product and incentivizes advertisers to this end.
  • Connection and Community: The proposed system is designed to integrate multiple communities into a unified, feed-like interface that highlights shared values, beliefs, and mutual awareness. By fostering deeper conversations, this system aims to cultivate a stronger sense of connection while maintaining diverse interactions across different groups.
  • Social Dynamism: The internet was envisioned as a tool for connecting and organizing social groups, but many emerging communities still struggle with coordination. While traditional social media has enabled marginalized groups to find one another, it has not always empowered them to build formal structures. The proposed system aims to address this by providing both the organizational tools and financial incentives for platforms to support community development.

Limitations

While this reimagining of social media is ambitious and has some apparent positives, it also has a few limitations worth considering.

  • A revenue model where communities pay for content that bridges divides may work for communities with sufficient organizational and financial resources; however, it may be an obstacle for informal groups with financial constraints. Also, since this model has not been tested in the real world, it is difficult to ascertain whether it will yield sufficient value to be commercially viable.
  • This proposal also requires users to be active, whether by opting to learn more about divisive content or by enrolling in and monitoring the communities they are involved in. This may be hard to achieve because, often, only a small subset of users engage with “effortful affordances.”
  • Lastly, as with any socio-technical system, there are inherent tradeoffs. If more resources and attention are allocated toward promoting content that bridges divides, exposure to contrarian perspectives might shrink, and vice versa.

Despite these limitations and the lack of real-world testing, this paper offers an ambitious, paradigm-shifting proposal that reimagines how social media operates and the goals it serves. The research is a work in progress and open to public deliberation, and many details remain to be worked out, especially regarding how this design could be implemented on open protocols.

“But ultimately,” Thorburn said, “we will learn the most from working through these open questions in the context of real platforms.” Other stakeholders across industry and civil society may have different approaches to the one shared in this paper, but that is the “nature of techno-political deliberation, and ultimately a form of policymaking in its own right.” Importantly, according to Thorburn, some social media platforms have expressed interest in this design approach, but questions remain about whether this approach is viable at scale.
