How State Tech Policy Can Promote Health Information Integrity: Lessons Learned from the COVID-19 Pandemic
Jennifer John, David Scales / Jan 8, 2025

From the rise of generative artificial intelligence to record levels of institutional distrust, the information ecosystem in the United States is facing unprecedented threats. Because the next natural disaster or global pandemic is a matter of when, not if, now is the time to proactively develop strategies to protect information integrity. This effort is particularly critical for state governments, which may be required to take a more active role in shaping health communication as federal public health agencies face significant uncertainty under the incoming Administration.
During the COVID-19 pandemic, community organizations, researchers, and local and state governments undertook hundreds of communication interventions. Despite the substantial resources dedicated to these efforts, little is known about their impact or effectiveness. In our research, we analyzed a dataset of 379 such interventions through the lenses of three public health frameworks (epidemiological, socio-ecological, and environmental) to identify gaps and opportunities. Our findings suggest not only improvements to individual interventions to mitigate potential informational harms but also an overarching blueprint for advancing coordination and strategy across the landscape of government information-related efforts. Because many states now monitor misinformation via live dashboards, how they respond to perceived informational harms has serious implications.
It is now common in public health to approach misinformation through an epidemiological framework, as if it were an infectious disease: tracking its spread over space and time and responding to “outbreaks” or “infodemics” of rumors. While this paradigm is useful for monitoring and mitigating the effects of particularly dangerous forms of misinformation (for example, the highly misleading documentary “Plandemic”), guiding state- and local-level responses according to a single paradigm can limit their effectiveness.
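To make the analogy concrete, infodemic researchers sometimes adapt the susceptible-infected-recovered (SIR) equations used for pathogens to rumor spread. The Python sketch below is purely illustrative rather than drawn from our study; the population size, transmission rate, and outbreak threshold are hypothetical values chosen for demonstration.

```python
# Illustrative SIR-style model of rumor spread: "infection" means actively
# sharing a rumor, "recovery" means ceasing to share it. All parameter
# values are hypothetical, chosen only for demonstration.

def simulate_rumor_sir(population=10_000, initially_sharing=10,
                       beta=0.3, gamma=0.1, days=90):
    """Discrete-time SIR simulation: beta is the rate at which exposure
    leads to sharing; gamma is the rate at which sharers stop."""
    s = population - initially_sharing  # susceptible: not yet exposed
    i = initially_sharing               # "infected": actively sharing
    r = 0.0                             # "recovered": no longer sharing
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

# A monitoring team might declare an "outbreak" once active sharers
# exceed some threshold share of the population (here, 5%).
for day, s, i, r in simulate_rumor_sir():
    if i / 10_000 > 0.05:
        print(f"Day {day}: ~{i:.0f} active sharers; outbreak threshold crossed")
        break
```

The limits of the analogy are visible even in this toy: the model treats every rumor and every community identically, which is precisely why a single paradigm can misdirect state and local responses.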
Thus, we examined other approaches, such as a whole-of-society response (also known as a socio-ecological framework), as the US Surgeon General recommended in his 2021 Special Advisory on misinformation. That document also offers recommendations on “building a healthy information environment,” suggesting that an environmental framework, drawing on analogous concepts in toxicology and hazard management from environmental health, may be a useful strategy for addressing infodemics. For example, mis- and disinformation are often described as “polluting” our information environment, suggesting that the goal should not be eradicating misinformation (like pollution, it has always been with us and always will be) but mitigating its ecological harms.
Through our research, we found four approaches to addressing these potential harms: regulation, funding, facilitating coordination, and strategic leadership and direction. For example, several states have taken steps to manage perceived misinformation and its associated harms through regulation. Some of these laws have faced implementation challenges, been blocked in court, or been repealed, like a California law that allowed the state medical board to revoke the licenses of physicians deemed to be spreading misinformation.
Well-written state regulation offers an opportunity to either incentivize or mandate a much-needed shift in focus from increasing the availability of high-integrity information to decreasing exposure to low-integrity information. Outside of the US, this has been done through public support of independent media. In the US, regulation targeting algorithmic amplification has been suggested, as it is less likely to run afoul of free speech doctrine.
Interventions from digital platforms to curtail the spread of misleading content were underrepresented in our dataset, a gap that has only widened as technology platforms make significant cuts to trust and safety teams or move to “community notes” in place of fact-checking. Thus, there is an opportunity for policy levers to align platform design and recommendation algorithms with goals for a healthy information ecosystem while avoiding further polarizing public perspectives on content moderation.
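To illustrate mechanically what aligning recommendation algorithms with information-ecosystem goals could mean, the sketch below blends predicted engagement with a source-credibility estimate when ranking a feed. The weights and the credibility signal are hypothetical assumptions for demonstration, not a description of any platform’s actual ranking system.

```python
# Toy feed ranking that discounts predicted engagement by an assumed
# source-credibility score (0.0-1.0). Weights and scores are hypothetical.

def rank_feed(posts, integrity_weight=0.5):
    """Order posts by engagement, partially discounted by credibility.
    Each post is a dict with 'engagement' (predicted interactions) and
    'credibility' (an assumed source-quality estimate)."""
    def score(post):
        # Blend raw engagement with credibility-weighted engagement,
        # rather than optimizing for engagement alone.
        return ((1 - integrity_weight) * post["engagement"]
                + integrity_weight * post["engagement"] * post["credibility"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral_low_integrity", "engagement": 0.9, "credibility": 0.2},
    {"id": "solid_high_integrity", "engagement": 0.6, "credibility": 0.9},
]
print([p["id"] for p in rank_feed(posts)])
# -> ['solid_high_integrity', 'viral_low_integrity']
```

A regulation need not prescribe such a formula; the point is that even a modest integrity weight changes which content is amplified, a design lever policy could target without moderating individual posts.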
Similarly, state-level incentives can promote the benefits of generative AI for the information environment, especially on social media, while minimizing its harms. For example, the California AI Transparency Act (SB-942) offers one potential regulatory model. In our dataset, we observed that interventions that leveraged novel technologies, such as chatbots, often did not align with the information needs and behaviors of users and communities. Recent research reveals that generative AI chatbots show promise in decreasing beliefs in conspiracy theories, even as they can also be weaponized to promote false claims about health.
States also have a role to play in providing sustainable funding streams to organizations engaged in protecting information integrity. Early in the COVID-19 pandemic, an explosion of information of widely varying quality was paralleled by increased funding opportunities for infodemic management, defined by the World Health Organization as efforts that address rumors, genuine concerns, and other information-related needs. However, funding was rarely extended beyond the pandemic period. Too often, the metaphor of information “going viral” clouds policymakers’ vision, leading them to assume, mistakenly, that infodemics ended when COVID-19 case spikes diminished.
This lack of continuity compromises trust-building efforts that require longitudinal and intentional engagement with community partners. As a result, the pandemic saw a proliferation of hastily developed resources (like communication toolkits) deployed independently of established stakeholder partnerships. Exemplifying the “Field of Dreams” fallacy (build it and they will come), these efforts often failed to reach their intended audiences. Additionally, the impacts of one-off interventions are unlikely to be sustained among those who remain immersed in a low-integrity information environment. This pattern thus represents a missed opportunity to develop interventions that are sustainable and proactive.
Sustained funding would additionally allow for the development of coordinated, multi-pronged interventions across multiple layers of society, a notable gap revealed by our analysis. While many of the interventions we examined were directed at individuals, structural interventions and those spanning multiple levels of influence (interpersonal, community, organizational, etc.) may be more effective than those siloed by geography, topic, or field of study. Similarly, few interventions considered the information environment as a whole, instead focusing on one component, such as sources of high-integrity information.
Coordination would also address the fragmentation and duplication of effort among funders and stakeholders that we observed in our research. While the urgency of the COVID-19 infodemic facilitated innovation, diversity of efforts, and tailoring of programs to community needs, it also led to redundancy, haphazard allocation of funds and resources, diminished sustainability, and, likely, reduced effectiveness. The result was a flourishing marketplace of ideas without a pipeline to support successful interventions that could achieve long-term impact.
For example, despite being intended to inform infodemic responses, many misinformation monitoring (aka social listening) tools appeared to lack coordination with prebunking or other interventions that could effectively utilize the insights they provided. Even now, multiple siloed social listening tools exist in different sectors, including public health, disaster preparedness, and financial services, with little cross-sector communication or data sharing. States can facilitate coordination mechanisms through legislation, funding, or convening relevant stakeholders.
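As one hypothetical illustration of such a coordination mechanism, the sketch below shows a shared alert format that a social listening tool might emit and a prebunking team in another sector might consume. The schema, field names, and spike threshold are assumptions invented for demonstration; no such cross-sector standard currently exists in these tools.

```python
# Hypothetical shared alert schema that could let siloed social listening
# tools hand findings to response teams (e.g., prebunking) across sectors.
# Field names and the spike threshold are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class RumorAlert:
    topic: str             # e.g., "vaccine safety"
    sector: str            # originating sector: public health, finance, etc.
    weekly_mentions: int   # volume observed by the listening tool
    baseline_mentions: int # typical weekly volume for this topic
    sample_claim: str      # representative post text for responders

    def spike_ratio(self) -> float:
        return self.weekly_mentions / max(self.baseline_mentions, 1)

def route_alert(alert: RumorAlert, spike_threshold: float = 3.0) -> str:
    """Serialize the alert to a common JSON format, tagging whether it
    warrants handoff to a response team or continued monitoring."""
    action = "prebunk" if alert.spike_ratio() >= spike_threshold else "monitor"
    return json.dumps({**asdict(alert), "action": action})

alert = RumorAlert(topic="vaccine safety", sector="public health",
                   weekly_mentions=4200, baseline_mentions=900,
                   sample_claim="Example circulating rumor text")
print(route_alert(alert))  # spike ratio ~4.7 -> routed for prebunking
```

Because the output is a plain, documented format rather than a proprietary dashboard view, a disaster-preparedness or financial-services team could consume the same alerts, which is the kind of interoperability states could require as a condition of funding.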
Finally, state-level policy can provide strategic direction regarding theories of change and guiding frameworks, an often overlooked aspect of infodemic management during the pandemic. Infodemic management approaches should intentionally select a paradigm or paradigms, such as the socioecological or environmental health framework, that aligns with the relevant type of information distortion. Circulating rumors may demand different responses depending on the topic, emotional valence, degree of uncertainty, and extent of spread within an information environment.
For example, some disinformation may employ narratives that undermine trust rather than emphasizing falsehoods, leading to greater uptake in communities already prone to distrust due to histories of marginalization. In this case, the dominant epidemiological model that considers rumors a contagion will likely struggle to detect the most consequential information distortions. On the other hand, an environmental approach would point toward funding cross-sector collaborations among media, public health, government, and other stakeholders to counteract the pervasive harms that can come from misinformation and mistrust. These paradigms can also clarify an intervention’s intended audience, an important component of an effective approach that was often unclear or poorly defined in the interventions in our dataset.
The challenges ahead for infodemic management in the United States and globally are significant, making it all the more important to learn from past approaches. Policymakers at the state level can proactively prepare for future infodemics through policies that incentivize intentional applications of novel technologies, provide sustained funding for coordinated efforts, facilitate coordination, and promote strategic alignment with frameworks that best fit their communities' challenges.
This post is part of a series examining US state tech policy issues in the year ahead.