A Realist Perspective on Trust & Safety
Dean Jackson / Jul 21, 2025

Dean Jackson is a contributing editor at Tech Policy Press.
In 2021, two ex-Facebook employees founded a nonprofit devoted to “integrity workers” like them—the Integrity Institute. The next year, the two-year-old Trust and Safety Professional Association launched “TrustCon,” the “First Global Conference for T&S Professionals.” It took place just before the first Trust and Safety Research Conference, which gathered a mix of industry and academic researchers at Stanford.
Trust & Safety was having its day in the sun. At Tech Policy Press, we marked the occasion with a podcast titled “Trust and Safety Comes of Age?” Around the world, civil society groups were eager to collaborate with industry professionals who appeared to share many of their objectives and could work to reform tech companies from within—a partnership that was already well underway in many cases. This inclination continues to this day: as a February 2025 Global Network Initiative workshop report put it, the T&S and digital rights communities “both focus on protecting online users from risk and harm, and therefore have many interests in common.”
However, this relationship between Big Tech and civil society is increasingly ineffectual. It has long labored under the assumption that—through enlightened self-interest, market pressure, or regulatory demands—social media companies can be incentivized to invest in best practices related to online safety, privacy, election integrity, and human rights. Each news cycle throws those assumptions deeper into doubt.
Move faster, break more things
There was an almost immediate hangover after Trust & Safety’s 2021-2022 coming-out party. Tech companies, crashing off a post-pandemic stock market sugar high, announced massive layoffs, which thinned the ranks of Trust & Safety teams. Elon Musk gleefully kicked off the trend after purchasing Twitter in late 2022, but other companies soon followed. By March 2023, law professor Kate Klonick declared “the end of the golden age of tech accountability” as tech executives began to treat T&S teams as cost centers, not assets. In June that year, the journalist Casey Newton asked, “Have we reached peak trust and safety?”
Not coincidentally, this retrenchment came after more than a decade of mounting political criticism. Years of pressure to moderate content faster and better, what some dubbed the “techlash,” generated its own backlash from the right, which accused platforms of anti-conservative bias and censorship. These claims were more smoke than fire, with little evidence to support them, but they were nonetheless effective.
With figures like Elon Musk and Rep. Jim Jordan (R-OH) leading the charge, the right waged a campaign of relentless media attacks, lawsuits, Congressional inquiries, legislative efforts, and outright threats against social media companies and their executives. Industry leaders who advocated for more action on issues like misinformation and violent incitement faced professional, legal, and even physical risk; meanwhile, those who took a more hands-off approach ascended.
Ultimately, the right’s pressure campaign succeeded. After the 2024 election, tech executives either caved or, sensing an opportunity, aligned themselves with the incoming Trump administration. Companies such as Meta abandoned best practices once informed by partnerships with civil society. They rolled back hate speech policies, dropped fact-checking commitments in favor of emulating X’s Community Notes, and replaced human-led impact assessments with systems built on large language models. Google followed suit, albeit more discreetly, including its own apparent retreat from fact-checking. Both companies also persisted with the rollout of AI chatbots to teen users despite growing evidence of harmful outcomes, including a case of suicide after alleged encouragement from an AI companion app and, in Meta’s case, a scandal involving minors and sexual content.
These examples are drawn mainly from the United States and Europe, where investments in moderation have historically been highest. Abroad, things are much worse. For example, a recent Oversight Board ruling on AI-generated election disinformation in Iraq called Meta’s approach “incoherent and unjustifiable.”
Flawed assumptions about Trust & Safety
Given this new reality, civil society groups must now adopt a more realist lens.
For years, civil society believed that collaborating with (and often applying pressure on) aligned teams inside tech companies would encourage companies to co-develop and apply best practices for protecting human rights and democratic processes online. The assumption was not unfounded; it was based on a time and place when companies like Meta publicly grappled with their roles in society and contributions to serious offline harms, including a genocide in Burma.
But that assumption is less convincing today, not because of T&S workers themselves, but because their influence has eroded alongside the rest of the market for tech workers. Take Mark Zuckerberg’s 2020 call with Donald Trump about the President’s incendiary post on the Black Lives Matter protests, in which Zuckerberg said, “I have a staff problem,” referring to internal pressure from Facebook employees for the platform to take action. Two years later, many of those staff were let go; four years later, Zuckerberg appears committed to replacing much of the remainder with artificial intelligence.
Trust & Safety professionals themselves know the landscape has shifted. In an April 2025 study powerfully titled “The End of Trust and Safety?” University of Washington researchers interviewed twenty people working in the field. The participants’ discouragement is palpable. Around elections, for instance, one said that “my theory is that a number of companies are trying to test the boundaries of how little moderation they can do.” Another said that an unnamed tech CEO is “incentivized to maximize shareholder value. That is his job. That is almost his only job. And so he’s not going to make decisions that focus on doing the right thing unless they also maximize shareholder value.”
Civil society leaders and T&S professionals recognize this, yet many still hope that content moderation and other best practices make business sense. “I believe T&S work is not a cost center. It is a profit enabler,” said one of the study’s participants. Or, as another put it: “You need a T&S team. Coca-Cola is not going to run an ad next to a white supremacy post.”
This line of thinking is realist in that it appreciates the corporate sector’s inherent motivations and instincts. However, it overestimates the field’s influence. Consider Elon Musk’s takeover of Twitter (now X). After Musk slashed the platform’s Trust & Safety teams and made the site friendlier to far-right content, revenue fell as users and advertisers fled. But today, X’s ship seems to have been righted. There is some evidence that advertisers are returning, and the company’s valuation appears to be improving. Alternatives like Threads and Bluesky surged briefly, but have not displaced X.
Other tech executives, while not as brazen as Musk, seem to be taking notes. After Zuckerberg’s controversial rollback of hate speech policies in January, Meta’s stock hit an all-time high. As of this writing, it’s on the way to surpassing that former peak, perhaps buoyed by Zuckerberg’s plans to achieve “superintelligence.”
It may be that some platforms are too big to fail—that the audience eager for, or willing to tolerate, toxicity is big enough to keep them afloat. This certainly appears to be the case for X, which has become one of the biggest hubs for the far right, if not the biggest. Paradoxically, as the social media landscape fractures across an increasing number of platforms, individual companies may also face less public pressure than before.
A final possibility is blunt and mathematical: perhaps runaway investment in artificial intelligence will compensate for any business lost to T&S cutbacks. In this view, social media platforms are just assets to be milked for cash flow to fund AI development and to provide more training data for it. Whatever the reason, the largest social media platforms have grown more profitable even as they backpedal on T&S commitments. Advocates’ prevailing theory of change is failing.
A civil society schism
Some readers may argue that new developments, such as investor-backed vendors cultivating their own AI and expertise or open-source toolsets like ROOST, will fill market gaps for T&S services and software. These innovations may benefit smaller companies that lack resources and capacity, but they are no remedy for a lack of willpower. It doesn’t matter whether Meta can detect hate speech if its policies now permit such content.
When tech executives rewrite rules for political expediency and financial gain, the problem cannot be treated merely as a failure of policy or process. Rather, the usual model of proposing solutions and best practices is inadequate in a world where tech companies have aligned themselves with illiberal governments. In many countries, content moderation has become a means of enforcing government fiat, and companies that once viewed transparency as their best defense against authoritarian demands now seem inclined to comply while fighting tooth and nail against regulation by democratic governments.
This dynamic exposes a divide within civil society. Some still hope to reform a system that could work if transparency, user safety, and digital rights were recognized as intrinsic to Big Tech’s business model—or if those values were mandated by regulation. Others see a system that surveils, censors, and amplifies propaganda on behalf of illiberal strongmen, for the benefit of a new oligarchic class, and ask: does it deserve to be reformed?
A realist assessment of the current moment suggests that one force capable of moving tech titans in a better direction—perhaps the only force short of a mass consumer movement—is state power. Just as platforms have toed the line for leaders like Trump, Indian Prime Minister Narendra Modi, and Turkish President Recep Tayyip Erdoğan while slow-walking or fighting regulatory compliance elsewhere, they have complied with forceful court orders in countries like Brazil. There, faced with a potential coup d’état and threats to assassinate sitting officials, the Supreme Court has played hardball, threatening to ban Musk’s X outright.
There are dangers in this approach. First, in places where democracy is collapsing or already in ruins, democratic forces cannot count the state as an ally. Second, in the remaining liberal democracies, free expression advocates, especially those in the US, will be deeply uneasy with the idea of state intervention, even if an unregulated tech sector accelerates democracy’s demise.
Some of these critics describe the current crisis as the result of the collapse of state and private power into a singularity that will crush public dissent. But warnings that state intervention will bring about this gravitational collapse miss an important point. Power is fungible; democracy and the rule of law are aberrations, and without them, economic and political power become indistinguishable. In democratic societies, state intervention by representative government is an important means through which the public can assert its will and interests against rapacious capital. The absence of thoughtful and timely government intervention is a significant factor in the merger of state and corporate power. A disarmed state cannot defend itself from oligarchy, its natural predator.
First steps forward
Perhaps some countries will find ways to exert pressure on Big Tech without compromising on free expression. Here, all eyes are on Europe and whether its experiments with digital regulation will produce substantial positive results without too many negative ones. While many international observers have cheered Europe’s comprehensive regulation in areas where other governments have failed to act, some warn that reducing Trust & Safety to a compliance function will lead companies to prioritize the problems they can measure over other, more difficult challenges. European regulations are also threatened by right-wing pushback led by the Trump administration.
Other countries with online safety legislation, such as the UK, may also show results. But the evisceration of US foreign aid has left a deep wound in the global digital rights community elsewhere, and most governments have too few resources to address tech regulation in earnest. It remains to be seen how well most nations will be able to mobilize and effect change in this new reality.
Meanwhile, US tech policy advocates are confronting a dead end. A decade of lost time hangs over them, with some trapped in an endless loop of commentating on their own defeat. Still, if hope is not a plan, then cynicism is not a strategy. The first step is to name the problem: in most contexts, advocates simply cannot match the political and economic power of today’s tech empires. Building that power will require a leap forward in organizing and communication, but many professionals in the field entered it seeking to become scholars or policy analysts, not digital influencers or street-level activists.
But the front lines are shifting. At least two paths forward are visible for those who can adjust. The first recognizes that in the cold, realist calculus of big business, consumers always wield some power. Perhaps the next chapter for tech advocacy involves convincing the public that they have given more to these digital platforms than they’ve received, much like gamblers at a slot machine. But just as people have not fled the world’s casinos in numbers large enough to hobble them, they might not walk away from today’s social media platforms, either.
The other path leads to more direct confrontation with tech’s new ruling class, in alliance with a progressive movement that sees concentrated wealth—like concentrated power—as anathema to political liberty. If history is any guide, progressivism is often most successful in times of calamity. Only time will tell whether the end of the Pax Americana will provoke such a crisis; but if it does, digital rights activists should be ready with a plan and a coalition that do not rely on industry’s better angels.