Beyond Deplatforming: The Next Evolution of Social Media May Make Banning Individual Accounts Less Necessary
Richard Reisman / Jun 13, 2021

Since his accounts on major platforms were suspended following the violent insurrection at the US Capitol on January 6, Donald Trump has been less of a presence on social media. But a recent New York Times analysis finds that while Trump “lost direct access to his most powerful megaphones,” his statements can still achieve vast reach on Facebook, Instagram and Twitter. The Times found that “11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more. How does that happen? …after the ban, other popular social media accounts often picked up his messages and posted them themselves.”
Understanding how that happens sheds light on the growing controversy over whether “deplatforming” is effective in moderating extremism, or merely drives it out of view temporarily, where it can intensify and potentially cause even more harm. It also illuminates a more fundamental question: is there a better way to leverage how social networks work to manage harmful speech in a way that is less draconian and more supportive of free expression? Should we really continue down this road toward “platform law” -- restraints on speech applied by private companies (even if under “oversight” by others) -- when it is inevitably “both overbroad and underinclusive,” especially as these companies provide increasingly essential services?
Considering how these networks work reveals that the common “megaphone” analogy underlying rhetoric around deplatforming is misleading. Social media do not primarily enable a single speaker to achieve mass reach, as broadcast media do. Rather, reach grows as messages propagate through social networks, with information spreading person to person, account to account, more like rumors. Trump’s accounts are anomalous, given his many tens of millions of direct followers, so his personal network does give him something of a megaphone. But the Times article shows that, even for him, much of his reach comes from indirect propagation -- dependent on likes and shares by others. It is striking that even after being banned, comments he made elsewhere were often posted by his supporters (or by journalists, and indeed by his opponents), and then liked and further shared by other users hundreds of thousands of times.
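To make that propagation dynamic concrete, here is a minimal back-of-the-envelope sketch in Python. The reshare rate, audience sizes, and hop count are illustrative assumptions, not measurements of any platform; the point is only that once the per-hop multiplier exceeds one, indirect shares dwarf the speaker’s direct audience, and a ban shrinks the seed rather than the mechanism.

```python
# A minimal, deterministic expected-value model of rumor-like propagation.
# All numbers (reshare rate, audience size, hop count) are illustrative
# assumptions, not measurements of any platform.

def cascade_reach(direct_followers, reshare_rate=0.02,
                  followers_per_resharer=100, max_hops=6):
    """Expected total impressions when each exposed user reshares with
    probability reshare_rate to an audience of followers_per_resharer."""
    exposed = direct_followers   # hop 0: the speaker's own audience
    total = exposed
    for _ in range(max_hops):
        # expected audience reached at the next hop
        exposed = exposed * reshare_rate * followers_per_resharer
        total += exposed
    return int(total)

# With direct access, the speaker's own followers are only the seed:
print(cascade_reach(direct_followers=10_000))  # 1,270,000 impressions
# Banned, but a few supporters repost to a combined 500 followers:
print(cascade_reach(direct_followers=500))     # 63,500 impressions
```

In this toy model, banning the speaker cuts reach twentyfold but does not silence the message; the supporters’ reposts seed the same cascade.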
The lesson is that we need to think of social networks as networks and manage them that way. Banning a speaker from the network does not fully stop the flow of harmful messages, because they come from many users and are reinforced by other users as they flow through the network. The Times report explains that Trump’s lies about the election were reduced far more substantially than his other messages not simply because Trump was banned, but because messages from anyone promoting false claims of election fraud are now specifically moderated by the platforms. That approach can work to a degree, for specific predefined categories of messages, but it is not readily applied more generally. There are technical and operational challenges in executing such moderation at scale, and the same concerns about “platform law” apply.
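The following toy contrast is purely illustrative (the handles and keyword markers are hypothetical stand-ins for a real classifier), but it shows why a category rule reaches reposts that an account ban misses, and why it is brittle outside predefined categories:

```python
# A toy comparison of account-level bans and message-level category rules.
# The handles and keyword markers are hypothetical stand-ins for a real
# classifier, not anything a platform actually uses.
BANNED_ACCOUNTS = {"@speaker"}
FRAUD_CLAIM_MARKERS = ("rigged", "stolen votes")

def moderate(author, text):
    if author in BANNED_ACCOUNTS:
        return "blocked"      # deplatforming: stops this author, no one else
    if any(marker in text.lower() for marker in FRAUD_CLAIM_MARKERS):
        return "downranked"   # category rule: applies to any author's message
    return "allowed"

posts = [
    ("@speaker",   "The election was rigged!"),
    ("@supporter", "Reposting: the election was rigged, stolen votes everywhere."),
    ("@supporter", "You know what really happened in November."),  # paraphrase
]
for author, text in posts:
    print(f"{moderate(author, text):10s} {text}")
```

The paraphrased third post slips through, which is one reason this approach does not readily extend beyond narrow, predefined categories.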
Social media networks should evolve to apply more nuanced intervention at the network level. There is growing recognition of the need to give each user deeper individual control over how messages are filtered into their newsfeed, and over whether harmful speakers and messages are downranked, based on feedback from the crowd, to reduce propagation. Such controls would offer a flexible, scalable, and adaptive cognitive immune system to limit harmful viral cascades. They can limit not only how messages propagate, but also how harmful users and groups are recommended to other users -- and can moderate which speech is impressed upon users without requiring a binary shutdown of expression.
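Here is a minimal sketch of what feedback-driven downranking with per-user control might look like; the scoring formula, field names, and thresholds are assumptions for illustration, not any platform’s actual mechanism:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    flags: int   # crowd signals that a post is harmful or misleading

def harm_score(post: Post) -> float:
    """Fraction of engagements that are flags: a crude crowd signal."""
    total = post.shares + post.flags
    return post.flags / total if total else 0.0

def rank_feed(posts, user_tolerance=0.3):
    """Downrank rather than delete: posts whose crowd harm signal exceeds
    the user's own tolerance sink to the bottom of the feed."""
    return sorted(posts, key=lambda p: (harm_score(p) > user_tolerance, -p.shares))

feed = [
    Post("Local news update", shares=120, flags=2),
    Post("Inflammatory rumor", shares=300, flags=450),
    Post("Cute dog photo", shares=80, flags=0),
]
for p in rank_feed(feed, user_tolerance=0.3):
    print(f"{harm_score(p):.2f}  {p.text}")
```

Note that the inflammatory post is demoted rather than deleted, and each user’s tolerance threshold is their own to set.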
Some experts propose that the best way to manage this at scale is to spin out the choice of filtering rules to an open market of filtering services that interoperate with the platforms and from which users can choose. Decentralizing this key function away from the dominant platforms, and the diversity of choices it could create for users, might prevent a speaker widely recognized as spreading lies and hate from gaining many tens of millions of followers in the first place -- and would break up the harmful feedback loops that reinforce the propagation of their dangerous messages. Perhaps such a system could have substantially prevented or reduced the propagation of the Big Lie, and thus obviated the need to deplatform a president. Instead, it would apply more nuanced downstream control -- a form of crowdsourced moderation emergent from the individual choices of users and communities of users.
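One way to picture such an arrangement: the platform supplies candidate posts, and a third-party service the user has chosen orders them. The interface below is hypothetical, a minimal sketch of the separation of concerns rather than any proposed standard:

```python
from typing import List, Protocol

class FilterService(Protocol):
    """The hypothetical contract every filtering service implements."""
    def rank(self, candidates: List[dict]) -> List[dict]: ...

class CivicFilter:
    """One imagined third-party service: downranks heavily flagged posts."""
    def rank(self, candidates):
        return sorted(candidates, key=lambda p: p["flags"] / max(1, p["shares"]))

class EngagementFilter:
    """Another imagined service: pure engagement ranking, for those who want it."""
    def rank(self, candidates):
        return sorted(candidates, key=lambda p: -p["shares"])

def build_feed(service: FilterService, candidates: List[dict]) -> List[dict]:
    # The platform supplies candidates; the user's chosen service orders them.
    return service.rank(candidates)

candidates = [
    {"text": "Inflammatory rumor", "shares": 300, "flags": 450},
    {"text": "Local news update", "shares": 120, "flags": 2},
]
print([p["text"] for p in build_feed(CivicFilter(), candidates)])      # news first
print([p["text"] for p in build_feed(EngagementFilter(), candidates)]) # rumor first
```

The same candidates yield different feeds under different services; the ranking choice, and the feedback loops it drives, move from the platform to the user.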
Under the status quo, we are left with the “platform law” policies set by a few dominant private companies, leaving no one satisfied. Instead, democracy would be far better served by digitally enhanced processes that apply more nuanced forms of “community law,” crowdsourced from each social network user and community as they interact with their networks.
This piece is also at Reisman’s blog, Smartly Intertwingled.