A Building Code for Digital Infrastructures
David A. Broniatowski, Joseph Simons / Mar 16, 2026
Slot machines in a casino. Shutterstock
Across capitals—from Washington and London to Canberra—the same concern keeps surfacing: a few companies shape how billions see information, speak, and organize. Governments are responding. The European Union is enforcing the Digital Services and Digital Markets Acts. Australia recently implemented its world-first Social Media Minimum Age Act, which went into full effect in December 2025. In the United Kingdom, the Online Safety Act has reached an aggressive enforcement phase, requiring platforms to deploy rigorous age-assurance technologies.
In the United States, the response is more fragmented. While federal action remains stalled, a patchwork of state-level age-gating laws has emerged. Meanwhile, a bipartisan coalition in Congress is pushing the Sunset Section 230 Act, a bill that would force the internet’s foundational legal "shield" to expire by December 31, 2026.
The shared challenge across all of these jurisdictions is to manage real risks without smothering innovation or restricting free speech. However, many current "solutions" lack evidence. At best, these approaches address the symptoms while ignoring how proposed policies might be constrained by the underlying system design of digital platforms.
The limits of ‘gates’ and ‘shields’
The current debate over tech regulation tends to rely on two primary levers. One side wants to build higher walls—such as the age-verification mandates sweeping through US statehouses. The other side wants to remove the legal protections of Section 230, effectively suing platforms into submission.
While these strategies are often pursued in tandem, neither approach addresses the architecture of the platforms themselves. Age gates are like prohibiting children from playing in a casino—they might nominally restrict access, but they do not provide a safe alternative, such as a community center. Similarly, rescinding Section 230 changes who is liable if a child is victimized within that environment, but it doesn't actually keep the child safe. It addresses the legal fallout after a tragedy, rather than mandating the architectural changes required to prevent it.
Our research demonstrates why these "access control" measures are likely to fail. In a study of Facebook’s vaccine misinformation policies, we found that while the platform removed specific content, the underlying architecture allowed motivated users to find new pathways, often resulting in content that was more misinformative and polarized than before. We observed a similar structural failure on Twitter during the pandemic, where even aggressive account removals failed to stop skeptical clusters from increasing their virality.
If a platform’s engineering is designed to reward engagement at all costs, simply removing a subset of users—whether they are "misinformation traffickers" or children—will not fix the system. The remaining users will simply adapt to the existing structural incentives, often with higher intensity. There are already widespread reports of teenagers circumventing Australia's age-gates in similar ways.
Community centers, casinos, and codes
A simple analogy helps. Think about the difference between a casino and a community center. Both are buildings where people gather and engage in activities, but they’re designed for very different purposes.
A typical community center is built around the needs of its members. It has open spaces for people to meet, a front desk to welcome visitors, windows so everyone can see what’s going on, and bulletin boards sharing information about classes and events. The space is designed to make people feel comfortable and connected.
A casino is built around extracting money from its visitors. There are no windows, making it easy for people to lose track of time. The building is designed to keep the temperature cold, so you stay awake and alert. Kitchens are placed close to central areas so that food and drinks, often free, are delivered straight to gaming tables, so you never have to leave. The structure is designed so that, when someone wins, the sounds are amplified to grab everyone’s attention, and flashy displays compete for your eye at every turn. And there are a lot of turns—the floor plan is deliberately designed so that you have to walk through the gaming area to get anywhere.
Right now, we are trying to achieve community-center outcomes in a space whose architecture more closely resembles that of a casino, with all its attendant hazards. Small fixes—tweaking recommendations or banning accounts—are like moving the blackjack table farther from the main entrance or banning a small number of notorious cheats. But as harmful behavior scales, these small fixes run out of road. If the goal is safety and free expression at civic scale, the blueprint needs attention. We should be discussing how to design the architecture of digital spaces—the standards.
A building code for the digital age
We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them. Building codes are not suggestions; they are the baseline that ensures a structure is fit for its intended use. The progress already made on developing design codes for social incentives and user experience is encouraging, and points toward a promising next step: developing robust standards that treat system architecture principles as fundamental rather than an afterthought.
This requires a shift in how we think about "transparency." Current regulatory frameworks, such as the EU’s Digital Services Act (DSA), create mechanisms to "open the black box" of social media platforms, mandating researcher access to platform data, audits, reporting, and investigations for which researchers have long advocated. While these measures are often framed as the gold standard for oversight, they may foster a "compliance" culture—a reactive, forensic exercise that verifies whether a platform is following its own "house rules" after the damage is already done. Furthermore, this process is often fragmented. Under the current DSA model, data access is determined by a tug-of-war between researchers’ specific questions and platforms’ willingness to export pre-existing datasets, with different degrees of access across jurisdictions. Accessing data to see what went wrong is forensic science that reactively assigns blame after the failure has already occurred; we need to proactively inspect the architecture before the disaster.
- Make the blueprint visible: Transparency must go beyond mere documentation. Platforms must disclose how their architecture is likely to shape system behavior, so users know what to expect. Design choices, such as how people join groups and what different classes of users are permitted to do, dictate the range of emergent behaviors. These structural consequences of design should be part of a public safety record.
- Inspections, not just research: Data availability must be driven by fundamental engineering questions, rather than being limited to the thematic areas currently prioritized by trends in academic research. Instead of a system where platforms, potentially in collaboration with researchers, choose which data "crumbs" to drop, we need the digital equivalent of "black box recorders" that provide the structural telemetry of the system. These recorders would capture standardized, high-fidelity data, such as path redundancy and node connectivity, evaluated in the context of specific architectural features, like group structures and following mechanisms, that might promote content cascades. Qualified, independent inspectors, who understand the relationship between these architectural features and the resulting information flow structures, could verify that a platform adheres to a broader architectural framework of safety—ensuring consistent, not just episodic, oversight. Tools such as the now-defunct CrowdTangle and the Twitter Academic API proved that such systems were both effective and technically feasible, and they could be augmented to include architectural data.
- Design for resilience: Critically, this is not about removing or censoring content; it is about understanding how the system’s design drives structure and behavior regardless of what is being shared. Policy discussions must address when virality of any kind can lead to harmful levels of exposure. Currently, our systems are prone to hijacking: because platforms are optimized to maximize engagement, harmful content and "AI slop" utilize the same high-velocity pathways as product ads and benign compelling content, but it need not be that way. A building code would require platforms to demonstrate that their "corridors" are engineered to mitigate these risks by design, managing the flow of the system rather than relying on the impossible task of policing every individual post.
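To make "structural telemetry" concrete, consider the metrics named above, path redundancy and node connectivity. The sketch below is a hypothetical illustration, not any platform's actual API: the toy following graph and the `reachable` helper are our own inventions, showing how an inspector might test whether a single account acts as a choke point through which cascades must flow.

```python
from collections import deque

# Toy "following" graph as adjacency sets: accounts and follow links.
# Two clusters ({a, b} and {d, e}) are bridged only by account "c".
graph = {
    "a": {"b", "c"}, "b": {"a", "c"},
    "c": {"a", "b", "d", "e"},
    "d": {"c", "e"}, "e": {"c", "d"},
}

def reachable(g, start, removed=frozenset()):
    """Accounts reachable from `start` after deleting `removed` accounts."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in g[node]:
            if nbr not in removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Choke-point check: does removing account "c" fragment the network?
before = len(reachable(graph, "a"))                  # 5 accounts reachable
after = len(reachable(graph, "a", removed={"c"}))    # only 2 remain
print(before, after)
```

In this toy network there is no redundant path around "c": its removal cuts reachability from five accounts to two. An inspector logging this kind of telemetry at scale could flag architectures where a handful of accounts or groups dominate information flow, before a failure occurs.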
From crisis to construction
This approach can build on familiar engineering and safety frameworks and existing guiding principles for platform design, such as the Neely Design Code for Social Media and those developed by the International Code Council (ICC). Courts, agencies, and public procurement already use ICC frameworks. In these systems, technical experts and engineers develop standards based on scientifically validated best practices. Different jurisdictions could then make independent determinations about the extent to which adherence to these standards is voluntary or required by law. This is also the current approach to national standards like the National Electrical Code (NEC). Demonstrating compliance with the standard then protects firms from liability and consumers from unacceptable risks.
Right now, we’re stuck in a cycle of crisis and reaction—bans, takedowns, and lawsuits—without fixing the buildings around us. We don’t rebuild a collapsed building to the same unsafe design. We shouldn't rebuild our digital public spaces that way either. Set the standard. Prove you meet it. Update it as we learn. That is how democracies can govern the systems that now so often appear to govern them.