New York Could Show Cities a Better Way to Work With Big Tech on Safety
Nikhil Jain / Jan 12, 2026
New York City Mayor Zohran Mamdani hosts his first new media press conference as mayor and conducts a brief tour of City Hall. Wednesday, January 7, 2026. Credit: Ed Reed/Mayoral Photography Office.
In his inauguration speech at the start of this year, Mayor Zohran Mamdani promised to govern New York City “expansively and audaciously.” He brings a high-profile team to lead that charge in a City Hall that, he says, will not hesitate to use its power to improve the lives of New Yorkers. With notable appointments such as Lina Khan, who challenged dominant technology firms as chair of the Federal Trade Commission, the Mamdani transition team has the expertise to expand the mayor’s technology governance agenda.
Regulating technology platforms can be complex at any level of government. While public support for Big Tech regulation has grown as concerns like social media addiction and AI-related risks have come to light, US federal and state policymakers often fail to align on governance approaches (take AI regulation or platform transparency, for instance). Local governments, as the closest form of democratic representation for many Americans, are sometimes better positioned to achieve ambitious technology governance goals, especially when the technology in question creates distinctly local impacts. Some, like New York City, have pioneered local laws on algorithmic governance, but they will need a better framework for a more controversial lever: targeted interventions in online content moderation.
Cities increasingly seek to get involved in online content moderation
New York has a brief but public history of taking on social media platforms over claims of harm related to online content, exemplified by former Mayor Eric Adams’ campaigns against “subway surfing” videos on TikTok and Instagram. With subway surfing deaths more than tripling over the last three years, city officials link the surge to social media virality. While young New Yorkers have been subway surfing for decades, local authorities argued in press conferences that algorithmic recommendations have newly encouraged children and youth to subway surf and post videos online. Through these public pressure campaigns, city officials successfully pushed online platforms to remove thousands of videos, amplifying a chorus of concerned parents whose lawsuits against these platforms (e.g., Nazario v. ByteDance Ltd.) are currently being considered in court.
New York is not the only city to challenge Big Tech platforms over online content. San Francisco’s City Attorney issued cease-and-desist letters to online marketplaces like eBay and Amazon demanding that they remove listings for license plate covers that were cutting into toll revenue. San Jose’s mayor used local news and media channels to try to pressure social media platforms into removing content promoting car sideshows. And as recently as November 2025, a group of school districts joined a coalition suing platforms for contributing to a youth mental health crisis, citing addiction as a public health nuisance. In each case, authorities tied online content to local governance concerns like toll collection, noise ordinances, and public health.
But local interventions in platform content moderation are rarely enforceable. While cities can regulate sharing economy platforms like Uber, Airbnb, and DoorDash through local legal mechanisms such as zoning and rental rules, they face barriers in regulating speech, in this case user-generated content, across platforms. Section 230 of the Communications Decency Act shields platforms like TikTok and Instagram from liability for most third-party content, making it extremely difficult for state and local lawsuits to proceed. Even if a legal challenge pierced this immunity, Big Tech companies have large, well-resourced legal teams with specialized expertise in speech law, dwarfing the capacity and budget of local government legal teams, many of which specialize in local law domains.
Legal challenges aside, government intervention in online content can also threaten civil liberties. When London’s Metropolitan Police Service formed a partnership with YouTube to flag UK drill music videos that it claimed would contribute to local gang violence, civil society groups and journalists sounded the alarm about censorship and bias. In New York, Adams pursued similar content actions, using public convenings to call on YouTube to remove drill and rap artists’ videos as part of a public safety agenda that critics decried as muzzling urban creative expression. On a more extreme scale, authoritarian regimes around the world manipulate content moderation systems to suppress all kinds of dissent online, demonstrating the risk that governments abuse moderation partnerships for political gain at the cost of free expression.
Do we really need cities to intervene?
Some would argue that government intervention is altogether unnecessary. Most tech giants self-govern user content through dedicated teams, typically under a trust and safety function, that develop detailed platform policies, enforcement and appeals workflows, content escalation systems, and investigation mechanisms for high-priority cases like credible threats and brand safety issues. If YouTube, Spotify, and Apple Music allow certain drill songs that local authorities flagged as potentially inciting violence, it is likely because these platforms weighed the tradeoffs and determined that the artistic value outweighed the risk of harm. In fact, some platforms go a step further and establish external review bodies, like Meta’s Oversight Board, to increase accountability for their content decisions. If platforms’ self-governance is to be trusted, cities should have no reason to intervene.
However, even the most well-intentioned trust and safety teams can fall short. Since these platforms operate globally, it can become impractical for internal teams to constantly modify platform policies in response to local incidents. Even with well-calibrated policies, most platforms outsource content moderation to contractors around the world who may lack the local context to identify veiled threats, coded language, and on-the-ground sociopolitical conditions, particularly when handling long moderation queues with short turnaround times. A moderator in another country may only have a few seconds to determine if a flagged image with questionable election information for a small US town is part of a misinformation campaign.
Add to this the emergence of AI-augmented harms, like fake images circulating online during the January 2025 California wildfires or police deployments triggered by misleading “AI intruder” prank calls to 911. In New York, just hours before Mayor Mamdani’s inauguration speech, a large crowd gathered in Brooklyn Bridge Park to watch a New Year’s Eve fireworks show that was never actually scheduled, apparently misled by AI-generated videos on Instagram and TikTok. Platforms now need to align their moderation workflows not only to rapidly evolving local incidents but also to synthetic content that is becoming increasingly difficult to detect, all on a global scale.
In 2025, the online governance gap widened further as federal officials and Big Tech CEOs alike rolled back support for important trust and safety functions. The year began with Mark Zuckerberg’s announcement of major reductions to content moderation operations at Meta and ended with the US State Department’s memo directing staff to reject visas for workers involved in fact-checking, content moderation, and other online safety activities. Trust and safety leaders argued that conflating these activities with censorship is alarming and overlooks the critical, life-saving work these workers do, especially in combating child sexual abuse material (CSAM), fraud, and terrorist content.
So perhaps cities can, and sometimes should, get more involved in online safety, particularly when the harms manifest locally. In more collaborative models, this might look like local authorities serving as both information hubs and first responders, flagging harmful material to platforms in response to threats. Taking the Los Angeles wildfires example, local authorities could rapidly tell platforms which AI-generated images they recommend for labeling or, in some cases, removal, while platforms in turn could boost authoritative local content in their feeds. This already happens informally (e.g., city officials contacting platform employees ad hoc) and, in some cases, formally through third-party flagger systems (e.g., the London Met and YouTube), but a content intervention protocol could bring structure, speed, and public accountability to the collaboration.
In more contested models, cities might turn to legal mechanisms. US cities can already use federal and state law to sue platforms over online content (e.g., San Francisco’s recent lawsuit against nonconsensual deepfake porn websites), and they can also develop local laws that become models for technology governance more broadly (e.g., New York City’s local legislation requiring audits of public-use algorithms and automated employment decision tools).
A proposed framework for cities to work with platforms
Given the legal constraints and civil liberties risks of intervening in platform content, local governments should apply a narrow set of criteria to determine whether recommending a content action is appropriate. As a starting framework, I would propose:
| Criterion | Example |
|---|---|
| 1. The flagged content and its impacts should be recognizably local and empirically observable in the municipality. | A documented cluster of subway surfing incidents in New York City that correlate with specific videos circulating online. |
| 2. Local officials should be better positioned to address the harm than federal or state officials. | A set of incidents related to nonconsensual AI-generated content in public schools, since local school authorities are more closely involved. |
| 3. The method of intervention should clearly connect to a local governance domain, such as public health, housing, education, or transportation. | A city fire department or emergency management agency asking platforms to label or downrank AI-generated images that spread false evacuation information during a local disaster. |
| 4. The intervention should not unduly burden the company to develop a patchwork of localized platform instances. | A request that platforms apply a consistent labeling policy to a specific category of content, rather than building a separate recommendation algorithm for each city. |
| 5. The content sharing should not cause privacy violations. | Platforms sharing hashed versions of known abuse imagery or deepfake files with local law enforcement, without exposing user identities. |
Checking a proposed content intervention against these criteria can help local governments ensure that the action falls within their purview, is feasible for a platform to implement, and protects residents’ civil liberties.
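To make criteria 3 and 5 slightly more concrete, below is a minimal, purely illustrative sketch of what a structured flag from a city agency to a platform intake system could look like. Everything in it is an assumption for illustration: the agency name, field names, and file path are hypothetical, no platform currently exposes such an intake format, and production hash-matching systems typically rely on perceptual fingerprints (PhotoDNA-style) rather than the plain cryptographic hash used here for simplicity. The point is only that a city could convey the governance domain, the documented local impact, and a fingerprint of the flagged file without sharing the media itself or any user data.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ContentFlag:
    """Illustrative payload a city agency might send to a platform's flagger intake (hypothetical)."""
    agency: str              # the local agency making the request
    governance_domain: str   # criterion 3: the local governance domain the harm falls under
    local_evidence: str      # criterion 1: documented, locally observable impact
    requested_action: str    # e.g., "label", "downrank", or "remove"
    media_sha256: str        # criterion 5: a hash of the file, not the file or any user data
    submitted_at: str        # UTC timestamp for auditability


def hash_media(path: str) -> str:
    """Hash the media file locally so only a fingerprint ever leaves the agency."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    flag = ContentFlag(
        agency="NYC Emergency Management",  # hypothetical sender
        governance_domain="emergency management",
        local_evidence="AI-generated evacuation map circulating during a declared local emergency",
        requested_action="label",
        media_sha256=hash_media("evacuation_map.jpg"),  # hypothetical local copy of the flagged image
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(flag), indent=2))
```

Running the sketch prints a small JSON payload that a platform’s trust and safety team could triage and log publicly, without ever receiving the underlying image or any information about who posted it.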
Under Mayor Mamdani, New York City has a timely opportunity to pioneer new models of collaboration and governance between local governments and Big Tech platforms. As home to hundreds of thousands of these platforms’ employees and a rapidly growing hub of AI startups, the city has both the expertise and the market power to build a model that respects freedom of speech while giving officials tools to address emerging local harms. In a time of rapid technological change, cities should not underestimate their capacity to leverage public-private partnerships and shape tech policy.