Give Group Admins Tools to Fight Disinformation In Immigrant Diaspora WhatsApp Groups

Michael Rain / Jun 9, 2022

Michael Rain is the founder and CEO of ENODI, a media and research company focused on people with immigrant backgrounds.

Mary Boateng wipes her face and sips a glass of ice-chilled water before joining her prayer-line call. The 56-year-old woman, a Ghanaian immigrant, spent twenty minutes inhaling steam from a boiling pot of orange and lemon peels. She followed the instructions from a video that advised people to hold their heads and breathe over a pot of citrus peel steam. According to the video, the steam gets to the back of your nasal system, “to kill the coronavirus before it even gets to your lungs.”

Mary saw the video because a fellow congregant shared it in a WhatsApp group connected to her church. When she joined her church’s daily conference call, she encouraged all members to try the routine, saying, “I feel it working, but I am still praying to God to keep me safe.”

A church member questioned the legitimacy of the “steam routine” on the call, but he could do nothing about the video’s impact in the WhatsApp group. Neither could the group admin.

This anecdote, related to me by a family member, is not unique. Indeed, WhatsApp group admins lack the controls to moderate conversations and content shared in the groups they create. To address mis- and disinformation, the company needs to introduce group admin tools to enable more user-focused content moderation.

WhatsApp In Immigrant Communities

In the absence of well-funded media that directly serves them, immigrant communities build communication channels and kinship through informal networks. Millions of immigrants and their children in the U.S. rely on digital spaces like WhatsApp groups to maintain cultural connections with their communities of heritage. Church-affiliated WhatsApp groups are a common example; other groups form around regions or topics like business and entrepreneurship. In these spaces, users share news and information they believe is helpful, but they often unwittingly spread misinformation and disinformation.

The WhatsApp platform is particularly influential among U.S. immigrants because it provides a lifeline to home countries: free text and voice communication with family members thousands of miles away. But it also means immigrant diaspora groups encounter misinformation and disinformation on social media that is tailored to their community context. For instance, during the 2020 election cycle, Latino communities were targeted with content comparing Joe Biden to Latin American dictators or asserting that Black Lives Matter activists practiced “brujería” (witchcraft).

Research has shown an almost four-fold increase in user reliance on social media platforms like WhatsApp for news and information since the start of the global pandemic. But the COVID-related misinformation and disinformation that thrive in immigrant diaspora communities have shaped people’s perceptions of vaccines and health safety protocols. We are seeing lower vaccination rates and higher skepticism in immigrant and ethnic groups, including Latino males and young Black New Yorkers, many of whom are Black immigrants who have been led to believe that COVID-19 vaccines were created to reduce the Black population. This assertion has spread widely, for instance, throughout New York City’s Haitian neighborhoods.

Group Moderation Tools Should Be a Priority

Social media platforms take a variety of approaches to COVID-related misinformation and disinformation. Twitter, YouTube, and Facebook remove and label content that violates their policies. Facebook and Instagram also make it more difficult for people to leverage ads to spread misleading news. These two platforms, both owned by parent company Meta, have built a partnership with the International Fact-Checking Network (IFCN) to provide third-party fact-checking services.

While it is owned by the same corporate parent, things work differently on WhatsApp because it is an end-to-end encrypted messaging app. The encryption provides a level of security that is vital to millions of users who want to protect their conversations from surveillance by governments, private firms, hackers, and others. Because the platform is closed, top-down labeling and content removal are not options. Instead, the company has run user education campaigns, introduced fact-checking bots, limited group sizes, and set limits on message forwarding to slow the spread of viral content. Content moderation on WhatsApp therefore requires a user-focused approach.
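To make the forwarding-limit mechanism concrete, here is a minimal sketch in Python of how such a cap might work: a message that has already been forwarded many times can only be passed on to a small number of chats at once. The threshold and limit values, and the function name, are illustrative assumptions, not WhatsApp’s published parameters.

```python
# Illustrative sketch of a forwarding limit. All constants and names here
# are assumptions for illustration, not WhatsApp's actual implementation.

FREQUENT_FORWARD_THRESHOLD = 5   # hops after which a message counts as "highly forwarded"
NORMAL_RECIPIENT_LIMIT = 5       # chats a regular message may be forwarded to at once
VIRAL_RECIPIENT_LIMIT = 1        # chats a highly forwarded message may reach at once

def max_forward_recipients(forward_hops: int) -> int:
    """Return how many chats a message may be forwarded to in one action,
    based on how many times it has already been forwarded."""
    if forward_hops >= FREQUENT_FORWARD_THRESHOLD:
        return VIRAL_RECIPIENT_LIMIT
    return NORMAL_RECIPIENT_LIMIT

# A message that has already hopped through six groups can reach only one more chat.
print(max_forward_recipients(6))  # -> 1
```

The design point is that this check requires no knowledge of a message’s content, only a forward count, which is why it is compatible with end-to-end encryption.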

Despite the initiatives Meta has launched, including those specific to WhatsApp, the company continues to draw criticism from advocacy groups for not introducing solutions to problems that disproportionately affect minority communities, including immigrants. Leaked internal company documents reveal an acknowledgment that weaker moderation for non-English speakers leaves the platform vulnerable to abuse by bad actors.

To combat misinformation and disinformation in immigrant diaspora WhatsApp groups, WhatsApp should implement group admin tools for content moderation and member management. The admin tools that currently exist in Facebook groups can be adapted and incorporated into WhatsApp groups. These features include keyword alerts and the ability to remove questionable content. Group admins could also be given the ability to attach links to verified information to counter false posts and commentary.
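Because WhatsApp is end-to-end encrypted, a keyword alert of this kind would have to run on the admin’s own device after decryption, never on WhatsApp’s servers. Here is a minimal sketch of that idea in Python; the Message and KeywordAlert types are hypothetical illustrations, not WhatsApp APIs.

```python
# Hypothetical sketch of a client-side keyword alert for a group admin.
# All names (Message, KeywordAlert) are illustrative, not WhatsApp APIs.

from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str

@dataclass
class KeywordAlert:
    # Terms the admin has chosen to watch for, e.g. known hoax phrases.
    watchlist: set[str] = field(default_factory=set)

    def flag(self, message: Message) -> bool:
        """Return True if the decrypted message matches a watched term."""
        text = message.text.lower()
        return any(term in text for term in self.watchlist)

# Example: an admin watching for the citrus-steam hoax described above.
alert = KeywordAlert(watchlist={"steam kills coronavirus", "citrus peel cure"})
incoming = Message(sender="+1555...", text="Citrus peel cure works! Try it.")
if alert.flag(incoming):
    print(f"Admin notice: message from {incoming.sender} matches a watched term.")
```

Note that in this design the watchlist and the matching both stay on the admin’s device, so the alert adds moderation capacity without weakening encryption.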

Additionally, the company should launch a group admin support service that helps moderators navigate user resources, report issues, and receive timely responses to inquiries, as it already does for Facebook group moderators. The service would keep admins knowledgeable about new features and informed about existing WhatsApp tools like fact-checking bots.

These tools can give admins the ability to address misinformation and disinformation content shared by group members. Research suggests that users would welcome this capability. In a study of Brazilian WhatsApp users, participants expressed a medium to high willingness to actively respond to COVID-19 misinformation shared by their peers on the platform.

Fact Checkers Can Do More With WhatsApp

The IFCN has a role to play here as well. The organization currently offers resources and training to over 100 fact-checking organizations globally. It also confers certification on publications that wish to be recognized as legitimate fact-checking sources. The IFCN works with Meta to connect it with third-party fact-checkers who do the work of verifying or debunking content shared on its platforms. On WhatsApp, certified publications can be set up as bots that users can chat with and send links, videos, and memes for review. The issue is that few fact-checking services offer immigrant diaspora groups a culturally relevant perspective applicable in the country where they currently reside.

The options for users like Ms. Boateng, in the citrus-steam example, are either a U.S.-based publication that lacks her Ghanaian cultural context or a Ghana-based service that will not speak to her American realities. Last year, Univision addressed this gap for the Hispanic community in the U.S. when its El Detector became an IFCN-certified fact-checker. The IFCN should invest in a focused campaign to recruit more U.S.-based publications that serve immigrant communities, so that WhatsApp users are more likely to use a fact-checking service that is culturally relevant to them.

It’s important to emphasize that this approach will not solve every issue related to misinformation and disinformation on the platform; the goal is to reduce it. Nor should we limit our attention to bad actors who create and populate WhatsApp groups to knowingly spread false information. This solution aims to give communities in these groups a greater capacity to challenge the information shared there. I would argue that thwarting the spread of false news in authentically created groups is just as important, because members hold a much higher level of trust in fellow members and are thus more likely to give credence to the content shared in these spaces.

Mitigating Harms, Protecting Encryption

Meta appears to be moving in the right direction on this issue. In April, WhatsApp announced several forthcoming features for chat groups on the platform, including group announcement tools and functionality that enables moderators to delete messages in the groups they administer. These are good first steps, but the company needs to expand the suite of features further.

Some may argue that the misinformation and disinformation problem is serious enough to justify piercing the veil of end-to-end encryption so that top-down approaches can be enforced on WhatsApp. This is a limited view of the importance of the platform and its privacy protections. Many vulnerable communities and activists who challenge power globally rely on encryption to communicate safely. Any remedy to the false-message problem must also protect users’ right to message privacy.
