100 Days of Trump: His Enforcers Are Waging War On Content Moderation. It’s Likely Just The Start.
Cristiano Lima-Strong, Anish Wuppalapati / Apr 30, 2025
Cristiano Lima-Strong is an associate editor for Tech Policy Press. Anish Wuppalapati is a Tech and Public Policy Fellow with Tech Policy Press at the Georgetown University McCourt School of Public Policy.

BROWNSVILLE, TEXAS - NOVEMBER 19, 2024: Then US President-elect Donald Trump speaks to Brendan Carr, his pick for Chairman of the Federal Communications Commission, as he attends a viewing of the launch of the sixth test flight of the SpaceX Starship rocket. (Photo by Brandon Bell/Getty Images)
During President Donald Trump’s first 100 days in office, his federal enforcers have dramatically expanded their oversight of how tech companies police speech online, a trend that could intensify as the administration tests new ways to target Silicon Valley giants.
The crackdown, spearheaded by Republican leaders at the Federal Communications Commission (FCC) and Federal Trade Commission (FTC), has so far largely taken the form of external pressure on digital platforms over so-called “censorship.” Trump, meanwhile, has ordered agencies to cease any contacts with companies that “facilitate” censorship.
But agency leaders have repeatedly framed content moderation as a potential legal matter, raising the prospect of more drastic enforcement action down the line.
Here’s a look at how their campaign has taken shape — and where it may head next:
Pressure on private companies
Brendan Carr and Andrew Ferguson, chairs of the FCC and FTC, respectively, have each publicly pressed tech companies over their moderation practices and demanded that they fork over information about their policies.
Prior to Trump re-taking office, Carr accused the heads of Google, Meta, Apple, and Microsoft of contributing to an “unprecedented” and “improper” surge in online censorship. In the November letter addressed to the CEOs, he appeared to threaten their companies’ liability protections under Section 230 and demanded that they provide information about partnerships with third-party fact-checkers.
Carr has since questioned whether Google-owned YouTube TV has a policy that “discriminates” against faith-based channels and pressed tech companies on how they plan to uphold free speech principles in the face of regulatory pressure from the European Union. Digital rights groups have blasted the actions as “more censorial” than the practices Carr is criticizing.
Similarly, Ferguson launched an FTC inquiry in February into “how technology platforms deny or degrade users’ access to services” based on user viewpoints and whether these practices violate the law. The agency has opened a public comment window until May 21, encouraging social media users to submit instances where platforms “banned, shadow banned, demonetized, or otherwise censored” their content.
Those public comments could serve as fodder for future enforcement actions by the agency.
Ferguson has separately scrutinized Amazon’s decision to ban certain books under its hate speech policies.
Threatening Section 230 protections for content moderation
Carr has repeatedly teased plans to have the FCC target Section 230, the tech industry’s prized liability shield, an action that would mark a major departure from the agency’s traditional focus on the telecommunications sector.
In his chapter of the conservative Project 2025 agenda, Carr said the FCC should reinterpret Section 230 to narrow its scope so that tech companies could not so easily invoke it to defend against lawsuits when they remove or restrict users’ content, a call he has since reiterated as chair. The plan mirrors an executive order Trump sought to implement near the end of his first term. Carr, who was an influential voice in congressional discussions over the fate of TikTok, has said he plans to have the agency work with legislators on Capitol Hill on potential Section 230 reforms.
Carr has also suggested that the FCC has the authority to require digital platforms to disclose more information about their moderation practices under the same law it uses to compel information from broadband providers.
Linking antitrust and censorship
Trump’s top antitrust enforcers have also tied online speech concerns to scrutiny of competition in the tech sector. While that framing has thus far only appeared in their public comments, they have hinted it could eventually serve as the basis for actual legal claims against tech companies.
In particular, both Ferguson and Justice Department antitrust chief Gail Slater have expressed concern about potential collusion among outside groups that pressure tech companies to police content more forcefully, such as advertisers who organize boycotts against platforms.
Speaking at a DOJ “Big Tech censorship forum,” Ferguson recently suggested such actions could constitute a “classic antitrust violation” as a “concerted refusal to deal” with a platform.
“Drying up access to ideas is an injury to consumers that the antitrust laws care about, and if the wielding of market power unlawfully makes that possible, that is what [federal law] is for,” he said. (Slater, who was seated next to Ferguson, replied: “That’s another ‘amen’ from me.”)
Ferguson and Slater have both said their ongoing antitrust cases against Meta and Google are important for preserving free speech, but so far those arguments have not found their way into the court proceedings.
Still, the Trump administration has already shown it is willing to merge cultural issues with competition oversight, including Carr’s recent remarks that he may block mergers by companies that promote diversity, equity, and inclusion policies.