Old Meets New in Online Influence

Josh Goldstein / Dec 17, 2024

Earlier this month, Meta published a threat report detailing five networks of fake accounts discovered on its platforms—one run from Moldova, one from Iran, one from Lebanon, and two from India—that were attempting to manipulate public debate. These threat reports, which Meta releases quarterly, emphasize that this “coordinated inauthentic behavior,” known to much of the field as “influence operations,” violates platform policies because of the networks’ behavior (running fake accounts that pretend to be people they are not), not their content (what specifically the accounts post about).

Each quarter, reading Meta’s threat reports feels both new and familiar. They feel new when they make attributions to actors who have not been caught running fake accounts before or when they show familiar networks jumping onto new topics of the day. In the Q3 report, for instance, in an update on the well-known Russia-origin operation known as Doppelganger, Meta shared that it “found links between some of Doppelganger’s activities and individuals associated with MGIMO (Moscow State Institute of International Relations).” This is a breadcrumb for researchers to follow.

But the reports also feel familiar when the tactics, techniques, and procedures, along with the dynamics between platforms and perpetrators, are largely consistent across time and place. Since these reports became formalized and routine in 2018, they have frequently described networks—no matter where in the world they originate—exhibiting a similar set of features. This quarter’s threat report is no different: The network originating in India amplifying posts from real politicians; the operation stemming from Iran running fake news outlets; the campaign from Moldova using AI-generated profile pictures (a more recent staple).

For those interested in learning more about covert influence operations on social media platforms, here are four lessons—old lessons, with new examples—that the Meta Q3 report brings back to the surface:

1. Virality and operational security can be at odds.

Propagandists often face two competing goals: On the one hand, operations targeting mass audiences want virality or at least pick-up among their target communities. If the theory of change is that people are more inclined to adopt the operator’s favored positions if they see them, then the operation needs the content to spread. If the goal is to distract, reach helps, too. On the other hand, propagandists don’t want to get caught. If they do, platforms with active moderation teams will remove the fake accounts, and the operation will be forced to begin anew (and build up new friends or followers).

The Meta Q3 report provides examples of how virality and operational security can come into tension. Detailing one of the networks from India, Meta reports that the propagandists used real accounts to befriend users, then restricted account visibility to “friends only.” This was likely an attempt to evade detection, but if only “friended” accounts can view their content, the operation limits its spread. In the case of the network from Iran, Meta notes that the fake accounts linked to websites that were only accessible via the Tor browser. Steps like these, taken to reduce the chances of getting caught, can cap the potential effectiveness of covert influence efforts.

2. Experimentation in adversarial spaces is ongoing—and mundane.

Online influence operations, like many adversarial spaces, operate as a cat-and-mouse game. The attackers (the propagandists) and the defenders (in this case, the trust and safety teams) react to each other’s behaviors.

Studying tit-for-tat adaptation can be tricky for independent researchers, since we have limited information about how companies find these networks, what automated filters do (and don’t) flag, and what decisions internal teams make. We can read reports about the accounts that were removed and sometimes access limited data about them, but we don’t know how the platform discovered the campaign. In some areas of social media research, researchers try to audit algorithms—by obtaining data donations or creating dummy accounts to see what algorithms show or what content gets flagged—but platforms (including Facebook) often prohibit that behavior.

The Meta Q3 report provides several anecdotes showing how the Russian Doppelganger campaign tested the platform’s automated filters. After one Doppelganger ad was blocked, the propagandists iterated with different versions of the same ad while increasing the width of a strikethrough line over the text. In another case, they took out ads with different caption/image combinations, varying between politics and croissants, to see what would bypass the filter. These examples show how granular—and mundane—adaptation and testing can be in practice.

3. Unequal moderation creates arbitrage opportunities for propagandists.

Since platforms are not typically required by law to remove political influence operations, how they go about takedowns is largely at their discretion. This leaves room for companies to vary in how (and whether) they approach the task—including how much they invest in internal and contracted investigative teams, what types of automated detection systems they build, and how much information they share with the public.

Propagandists, like spammers or stock pickers, are opportunists. In a forthcoming paper with my colleague Renee DiResta, we describe a regulatory arbitrage dynamic in content moderation. If certain platforms crack down on networks of fake accounts, the propagandists running them will be incentivized to build up accounts on platforms where they are less likely to get caught—or at least to forgo, on those platforms, some of the operational security measures that might hinder virality. Combating influence operations therefore requires an ecosystem approach, not a one-off one.

In the Q3 report’s Doppelganger update, the company notes: “Many of the adversarial shifts that appear primarily on our platforms do not show up elsewhere on the internet where operators continue using some of their older known tactics.” If granted data access, researchers could build on this by more systematically comparing Doppelganger activities across platforms to home in on how different the operational activities are. This would be particularly interesting to study between platforms with more established detection filters (like Meta) and newer or federated platforms building them in real time (like Bluesky).

4. The line between the “overt” and “covert” can be blurry.

Research on covert propaganda often focuses on coordinated inauthentic behavior, where, by definition, accounts claim to be people they are not. The operators of these networks are not transparent about their identity. In theory, it's pretty simple.

In practice, distinguishing an overt propagandist from a covert one can be challenging. Propagandists, especially those aligned with or backed by states, often have multiple tools at their disposal—including both overt and covert media. We saw a similar pattern earlier this year when the US government announced that RT (Russia Today) is “engaged in information operations, covert influence, and military procurement”: people associated with an overt media outlet running covert accounts.

The Meta Q3 report’s discussion of the network from Lebanon notes that the company “found links to individuals in Lebanon, including some with links to two media entities: Al Mayadeen in Lebanon and LuaLua TV registered in the UK.” The vague attribution language obscures whether those links are to low-level employees or to the CEO, for instance, but Meta has reputational incentives to get these attributions right. The finding once again highlights the blurriness between overt and covert media.

Looking forward, don’t shrink reporting—expand it.

I’ve argued that Meta’s quarterly threat reports often surface the same lessons. If that’s the case, one might question the utility of influence operations reporting: why invest resources in creating adversarial threat reports that surface the same core features, when security teams have a range of other threats to deal with?

But halting the reports because their lessons are now well established would let the reports’ success become the cause of their demise. That would be a mistake.

Since companies have back-end data that independent researchers cannot collect, these reports make some of the strongest attributions in the field. This is important for public transparency: people should know if politicians, marketing firms, or other entities are deceptively engaging in public debate with fake accounts. The reports may also deter actors who are considering running fake accounts but worry that getting caught would hurt their reputation. Finally, the reports benefit the research ecosystem, since they provide new leads for third-party investigators and offer insight into the latest adversarial adaptations, which quantitative researchers can use to update their detection models.

Instead of shrinking disclosures, Meta could expand its reporting in meaningful—but relatively low-cost—ways. For one, adversarial threat reports provide case summaries for political influence operations, but they typically don’t go into detail on other types of coordinated adversarial activity, like fake accounts used for spam, scams, or stock pumping, for which the company reports only aggregate numbers. Craig Silverman and Priyanjana Bengani, for example, reported that Meta earned more than $25 million from deceptive ads related to the US election and other social issues across more than 300 Pages, most of which were run by lead generation companies. Meta could add a section to the quarterly report that provides case studies of these abuses to give more granular insight into the actors behind them, even if the threats are financially, not politically, motivated.

Meta could also expand its data sharing by piloting new institutional arrangements. The company used to routinely share information with research partners, including the Atlantic Council’s DFRLab, Graphika, and the Stanford Internet Observatory (my previous employer), about networks before they were taken down; this allowed researchers to glean insights into account behaviors that are more difficult to ascertain once the accounts have been removed and archived. Judging from the number of takedown reports recently published by external researchers, that practice seems to have slowed. Beyond sharing takedown data with researchers, Meta could try out other arrangements, too, perhaps mimicking the structure of PhD internships in computer science by bringing external researchers onto the team for well-scoped, time-bound projects that would then be published openly.

Meta’s reports have built a body of cases establishing lessons for the influence operations field, and the quarterly updates offer new attributions, expose different networks, and document tactical adaptation. Now is the time to take a page from the influence operators themselves and experiment with new reporting practices.
