Meta’s New Default Limits on “Political Content” Weaken Free Expression Online
Leanna Garfield / Apr 22, 2024

Meta’s Instagram and Threads are home to all kinds of important content, ranging from thoughtful explainers on consequential legislation to political news and analysis to basic information about voting and elections. It was therefore surprising when Meta announced in February that it would make far-reaching changes to the two apps, affecting millions of users. In recent weeks, the company quietly opted all users into a default setting that algorithmically limits the reach of “political content” from accounts users don’t follow. Meta explains: “These recommendations updates apply to public accounts and in places where we recommend content such as Explore, Reels, In-Feed Recommendations and Suggested Users.” The setting will also roll out to Facebook at a later date.
In response, Accountable Tech and GLAAD, on behalf of more than 200 creators, sent an open letter to Meta speaking out against the changes, which have several implications for free expression, especially during an election year in the US and globally. As the letter explains, creators have several concerns. The first is Meta’s definition of “political content,” which the company describes as content “likely to mention governments, elections, or social topics that affect a group of people and/or society at large.” That definition sweeps in anything remotely “political,” making it inherently biased against so-called “social topics” including climate change, gun violence prevention, racial justice, LGBTQ and disability rights, and reproductive freedom, to name a few. The letter states: “Meta’s vague definition of political content … endangers the reach of individuals and organizations whose identities and/or advocacy have been rendered a ‘social topic’ in this country.” By virtue of this definition, Meta itself is politicizing people’s identities, treating them as “political content” rather than as simply part of who they are.
Second, Meta states that it will no longer recommend such content. In other words, it is excluding this content from algorithmic recommendation, which is essentially suppression, or “shadowbanning.” The letter urges the company to reverse how it implemented the setting: Meta forcibly opted all users into the new default, under which they no longer see “political content” in recommendations. The company should instead let users choose to opt in. Currently, users must proactively find and change their settings, and since Meta did not message users directly about the change, most are likely unaware that such content is now being filtered out of their feeds.
News and human rights content creators told The Washington Post that Meta’s decision is already affecting the reach of their content. “This hurts people’s access to information and their ability to find accurate information,” said Ky Polanco, co-founder of @Feminist, an Instagram news page that covers women’s and abortion rights. Polanco, for example, saw the account’s reach decline from 10 million users to 800,000. For accounts like @Feminist, the goal is to raise awareness and reach new audiences. While Meta may deem it “political,” this is vital content aimed at public education and civic engagement, and the new setting now limits it by default.
Third, Meta made this decision unilaterally, with little transparency or input from civil society or creators. Many users are not even aware of the new on-by-default setting, which is relatively hidden inside the apps’ “content preferences” settings. (Users can learn how to opt out here.) While on the surface the setting may seem to give users more control over their feeds, in practice it likely gives them less.
As the letter also highlights, Meta has not been transparent about why it made the changes. Like TikTok, Instagram and Threads rely heavily on algorithmic recommendations, serving users content from accounts they don’t already follow. Yet Meta downplays how much the setting affects creators and user feeds: “If you decide to follow accounts that post political content, we don’t want to get between you and their posts, but we also don’t want to proactively recommend political content from accounts you don’t follow.”
It is especially concerning that Meta is characterizing "social topics that affect a group of people and/or society at large" as "political content" and then limiting the reach of such content. Equally concerning is the lack of transparency about how such suppression is implemented, since "social topics that affect groups of people" would include historically marginalized groups (e.g. LGBTQ people, people of color, people of different faiths, people with disabilities, women, and more). As GLAAD conveyed to The Washington Post, “Categorizing ‘social topics that affect a group of people and/or society at large’ as ‘political content’ is an appalling move. LGBTQ people’s lives are simply that, our lives. Our lives are not ‘political content’ or political fodder. This is a dangerous move that not only suppresses LGBTQ voices, but decimates opportunities for LGBTQ people to connect with each other, and allies, as our content will be excluded from the algorithm.”
A larger implication of such a consequential change is that it illuminates how Meta, and many of the other online platforms we rely on, wield extraordinary, unchecked power over our information ecosystem. Our most basic news and civic information now reaches us predominantly through channels controlled by companies that can make changes like this at will, deciding, without transparency or accountability, what will or will not reach us. That is indeed a topic that affects “society at large.”