Can Tech Promote Social Cohesion?
Tim Bernard / Apr 3, 2023

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha and worked in various capacities in the education sector.
There is a broad consensus that social media presents a challenge to democracy and social cohesion, even if the degree and precise mechanics of that challenge are still contested. An emerging community of engineers and thinkers is invested in the idea that the power of tech platforms to stoke division might instead be used to promote social cohesion, if their systems are redesigned with that goal in mind. If platforms such as Facebook and Twitter have contributed to phenomena such as polarization, the thinking goes, then perhaps they or their successors can do the opposite.
A couple of hundred people interested in exploring this hypothesis came together in San Francisco in February for the inaugural Designing Tech for Social Cohesion conference, the first public event of the Council on Technology and Social Cohesion. The Council is convened by a group of organizations—including Search for Common Ground, the Toda Peace Institute, Braver Angels, More in Common, and the Alliance for Peacebuilding—that work in peacebuilding (often known as bridge building in the US), together with the Center for Humane Technology, which advocates for building tech that contributes to a “humane future that supports our well-being, democratic functioning, and shared information environment.”
The Council and conference grew out of a conversation that began when Shamil Idriss, CEO of Search for Common Ground, appeared as a guest on the Center for Humane Technology’s podcast; the discussion considered how the insights and practices of peacebuilding could inform the use and design of social media and other technology to positively impact social cohesion. Also prominent among the stakeholders who formed the Council is Dr. Lisa Schirch of the University of Notre Dame, who directs the Toda Peace Institute’s Social Media, Technology and Peacebuilding program; her research unites expertise in technology and the impact of social media with peace studies. She wrote up the conclusions of an initial working group convened by the Council in a forthcoming Yale Journal of Law and the Humanities article, “The Case for Designing Tech for Social Cohesion: The Limits of Content Moderation and Tech Regulation,” which served as a conceptual framework for the conference.
Toxic polarization occurs when people perceive other groups as existential threats, distrust and dehumanize others with us-vs-them narratives, and justify the use of violence against others.
Technology or tech refers here to digital tools, with a particular but not exclusive focus on social media.
Social cohesion refers to the glue that keeps society together; it is the opposite of toxic polarization. The United Nations defines social cohesion as “the extent of trust in government and within society and the willingness to participate collectively toward a shared vision of sustainable peace and common development goals.” Three dimensions of social cohesion include individual agency, horizontal relationships, and vertical relationships.
Bridge building and peacebuilding are types of prosocial interventions that support the goal of social cohesion in three ways.
1. Increasing individual agency;
2. Bridging relationships between groups; and
3. Building public trust between society and governing institutions.
PeaceTech refers to technology that supports both the analysis of polarization and bridge building or peacebuilding interventions to support social cohesion.
—Executive Summary of The Case for Designing Tech for Social Cohesion
The first step of a ten-part roadmap that the working group produced is to “Institutionalize the Cohesive Tech Movement,” and one of the key goals of this conference was to advance this work, bringing together “tech innovators, Trust & Safety staff, [and] practitioners with community bridge building and global peacebuilding experience.” The other stated goal of the conference was to showcase “a new generation of tech products that offer design affordances and algorithms optimized for prosocial content.”
Tech for social good and “PeaceTech” are neither new ideas nor new movements. Readers are likely familiar with the broad category of public interest tech, and the PeaceTech community has been convened and promoted in such venues as the ICT for Peace Foundation (2003), the PeaceTech Lab (originating in 2004), the Peace Innovation Lab (2008) and Institute (2018), the BuildUp conference (2014), and, more recently, the Global PeaceTech Hub and The GovLab. The unique aspiration of the “cohesive tech movement” seems to be to influence tech firms that are not primarily in the “PeaceTech” business to incorporate insights, features, or specific technologies into their product design so as to advance social cohesion at greater scale.
The conference attendees fell into three main buckets:
- Members of the peacebuilding and bridge building community, some of whom had a significant focus on using technology for their work, and others who were interested in increasing their use of technology.
- Technologists, split between academic or NGO-affiliated researchers and those working for smaller companies, with a small number of employees from larger tech companies such as Meta and Google, several of whom attended in a personal capacity.
- Other interested parties: funders, peacebuilders who don’t actively use technology, authors, and other NGO or quasi-governmental organization representatives.
The role of social media in social cohesion was highlighted in introductory remarks by Schirch and the Center for Humane Technology’s Tristan Harris (it is also a focus of the report materials on the Council’s website). Harris appeared to lay the blame for toxic polarization squarely at the feet of big tech social media platforms, whereas Schirch described social media as significantly “amplifying” toxic polarization. Opinions among speakers varied, but there was a general consensus that social media plays some role in toxic polarization, with some characterizing it as exacerbating the problem rather than being its source. There was, however, little discussion of the extent of social media’s causal responsibility, and little reflection on the inconclusive results of studies of its impact on social cohesion.
One panel did feature two speakers who focused on possible solutions to polarization caused by social media. Ravi Iyer of USC and the Psychology of Technology Institute, a former Meta employee, noted that we do not need to know what portion of the blame should be attributed to social media platforms in order to be obliged to fix the problems associated with it. He suggested specific approaches to algorithmic transparency and regulation, as well as other design approaches that he believes could decrease toxic polarization and enhance social cohesion (all covered in detail in his newsletter). Harvard’s Aviv Ovadya gave a technical overview of how bridging algorithms—which promote content that is approved of by a critical mass of people on each side of a political divide—work, and why they could help social media become a force for social cohesion.
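To make the mechanism concrete, here is a minimal sketch of one way a bridging score might be computed: rank an item by its lowest approval rate across groups, so that only content with support on every side of a divide scores highly. This is an illustration under simple assumptions, not Ovadya’s actual formulation; the function name, voting data structure, and threshold value are all hypothetical.

```python
from collections import defaultdict

def bridging_score(approvals, group_of, min_group_raters=3):
    """Score one content item by cross-group approval (illustrative sketch).

    approvals        -- dict of user_id -> bool (did this user approve the item?)
    group_of         -- function mapping user_id -> group label (e.g. "left"/"right")
    min_group_raters -- "critical mass": groups with fewer raters are ignored

    Returns the minimum approval rate across qualifying groups, so an item
    scores highly only if every side of the divide tends to approve of it.
    """
    tallies = defaultdict(lambda: [0, 0])  # group -> [approvals, raters]
    for user, approved in approvals.items():
        group = group_of(user)
        tallies[group][1] += 1
        if approved:
            tallies[group][0] += 1
    rates = [ok / n for ok, n in tallies.values() if n >= min_group_raters]
    if len(rates) < 2:  # need a critical mass on at least two sides
        return 0.0
    return min(rates)

# Toy example: an item that majorities on both sides approve of
votes = {"u1": True, "u2": True, "u3": False, "u4": True, "u5": True, "u6": True}
party = {"u1": "left", "u2": "left", "u3": "left",
         "u4": "right", "u5": "right", "u6": "right"}
print(bridging_score(votes, party.get))  # ~0.67: min(2/3 left, 3/3 right)
```

The contrast with engagement-based ranking lies in the aggregation: engagement sums reactions from anywhere, while a bridging score is capped by the least-approving side.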
In a side conversation following the panel, I asked Iyer whether bridging algorithms could silence minorities. My concern was that views considered extreme or combative may yet be just, and that systems designed to help find common ground may penalize such speech or speakers in a bid to find balance (the Civil Rights movement was polarizing in its time, as are any number of other bids for social or environmental justice). He explained that these algorithms can be calibrated in any number of ways: to look for greater or smaller numbers of people on each side of a divide, or to select for different kinds of divide, not only right-left but also young-old or any other characteristic. Iyer also noted that no ranking algorithm (including typical engagement-based ranking) can be truly “neutral” in its outcomes. I suspect that platform executives who rely on the revealed preferences of their users may be unwilling to make more actively value-laden decisions about what kinds of speech get promoted.
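Iyer’s calibration point maps directly onto the sketch above: the grouping function and the critical-mass threshold are exactly the kinds of knobs he describes. The data below is again invented for illustration and reuses the hypothetical `bridging_score` and `votes` from the previous block.

```python
# Calibrate along a different divide: age cohort instead of party.
user_ages = {"u1": 22, "u2": 67, "u3": 31, "u4": 45, "u5": 29, "u6": 58}
age_group = lambda u: "young" if user_ages[u] < 35 else "old"
print(bridging_score(votes, age_group))  # ~0.67, scored across young/old

# Demand a larger critical mass per side: with only three raters per
# group, no group qualifies, so the item cannot count as "bridging".
print(bridging_score(votes, age_group, min_group_raters=10))  # 0.0
```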
In another panel, Colin Rule mentioned that platforms’ use of their systems to promote conflict resolution and social cohesion might be called “benevolent manipulation,” which got a laugh in the room. But outside of Schirch’s introductory remarks, where she referenced BJ Fogg and, like Iyer, made the point that design is never neutral and always embeds some values, there was no critical discussion of whether deliberate, nonconsensual social intervention via tech platforms is ethical or appropriate, or of what safeguards would need to be in place before such an approach is adopted. The presence of more tech industry figures might have prompted this crucial discussion.
The moderator of the aforementioned panel with Iyer and Ovadya was Berkeley’s Jonathan Stray, a researcher in recommender systems and their impact on society, who also writes Better Conflict Bulletin, a newsletter on (roughly) bridge building in the US. Though this may not have been well known at the time of the conference, Stray and his research partners have begun a project working with Meta to conduct on-platform studies. Stray and his collaborator, Gillian Hadfield, note that there is “serious distrust between the researchers studying how to improve recommender systems and the platforms that operate them,” but that conducting live experiments on the social media platforms themselves is essential in order to get dependable results.
Step nine of the Council’s roadmap is to explore regulatory incentives for tech platforms to promote social cohesion, and several somewhat vague proposals appear in the report and were mentioned during the closing panel. Another group notably absent, however, was those with expertise in tech law and regulation. Some of the suggestions, such as taxing polarizing outcomes or mandating the use of bridging algorithms, seemed implausible but went unchallenged. The only note of caution came when Deepanjalie Abeywardana of Sri Lanka-based Verité Research observed that authoritarian regimes have cited the social media regulations enacted in liberal democracies as justification for creating their own draconian laws. Might such regimes be quite pleased to “tax” or prohibit what they regard as “polarizing” dissent?
Rhetoric that places unequivocal blame for polarization on the platforms and pushes aggressive regulation seems likely to discourage greater participation by members of larger tech companies. Stray and Hadfield note, with respect to research:
“The problem with instinctive distrust of platforms is not that platforms are above criticism, but that blanket distrust blocks some of the most valuable work that can be done to make these systems less harmful, more beneficial, and more open.”
More broadly, by this logic, if platform representatives are to be part of a movement for social cohesion within the tech industry, they must be treated as partners rather than opponents. One participant observed that, at least within the US, the peacebuilding community and the social justice community tend to be quite distinct. I’d suggest that activists like Harris may belong to the latter camp, at least in temperament. Although this may not be the synergy that the organizers had in mind, perhaps the experienced peacebuilders at the conference can help build bridges between critics of social media and those who operate the platforms themselves.
Aside from the absence of some important stakeholders, one might ask whether there is real potential for a “cohesive tech movement.” Many of the PeaceTech examples presented at the conference, and the perspectives of their creators, are not clearly compatible with the mission of transforming social media to reverse phenomena such as toxic polarization or to increase social cohesion. Waidehi Gokhale, CEO of Soliya, a peacebuilding organization that runs its programs on its own online platform, expressed a commitment to human facilitation and slow growth; Arik Segal of Conntix explained how online platforms can be used specifically as a complement to in-person dialogue; and BuildUp’s Phoenix is a tool for analyzing social media to surface insights for “traditional” peacebuilding projects. Can PeaceTech inform the design of existing and future tech platforms to improve social media? Perhaps—and that is no doubt a praiseworthy goal—but it does not necessarily make a movement; ironically, there did not appear to be a cohesive vision shared by most participants.
Perhaps more closely aligned with the specific goals of the Council, and unlike typical PeaceTech efforts, two projects presented at the conference were actively incorporating the insights of peacebuilding into for-profit platforms that are not aimed at the peacebuilding movement: Gatheround and Slow Talk. Both have wider aspirations but, indicative of the challenges in scaling and mainstreaming peacebuilding, are currently only available to corporations for HR-organized internal conversations. Serious dialogue across conflict boundaries is difficult, and it is unclear how many people will engage in this work in their leisure time, especially without all the personnel-heavy resourcing of typical peacebuilding efforts. Further, participation in grassroots peacebuilding tends to be self-selecting, not imposed by a social platform that is being used for other purposes, and it remains to be seen what impact might transfer from one context to the other.
Lucas Welch, the founder of Slow Talk, raised the issue of business models for platforms and the economic incentives they bring with them. In particular, the dominant advertising-based model means that platforms are engaged in a fight for immediate user attention in order to sell ad impressions, and polarizing content is attention-grabbing. (In their review of the evidence on social media and social cohesion, Sandra González-Bailón and Yphtach Lelkes identify the attention economy as a root cause of the phenomenon.) Despite a conference panel on for-profit, non-profit, and hybrid funding opportunities for PeaceTech, no clear alternative funding models were suggested for either existing or new for-profit social platforms.
During her opening presentation, Schirch declared that “[t]he road to Hell is paved with code, and if we want to pave a road out of Hell ... we need to think about how we pave a different road with different code.” However, it is not yet evident how much responsibility tech bears for current levels of toxic polarization, nor how effective it can be at reversing the trend. Furthermore, in the current political climate, with social media firms under fire from Republicans for supposed political interference, it is hard to imagine Meta, Twitter, or TikTok executives defending practices intended to manipulate political discourse at scale, even if those practices are prosocial. Relying on “we just promote the revealed preferences of our users” and “we want to entertain our users” may seem preferable to explaining why their platform deems this form of political discourse good and that one bad.
Even if the chances of transforming major social media platforms into engines of social cohesion are minimal, there may still be much that the peacebuilding and tech worlds can learn from each other, and a regular convening of these two groups can certainly be of incremental benefit to society.