September 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Gabby Miller, Ben Lennett / Oct 1, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is Managing Editor of Tech Policy Press, and Gabby Miller is staff writer at Tech Policy Press.

September 18, 2024: US House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) presides over a markup hearing.

With the start of the fall term, US tech policy had an active September, including developments in two important court cases and progress in Congress on child safety and AI. Here are some highlights:

  • The House Energy and Commerce Committee voted to advance KOSA and COPPA 2.0.
  • An appeals court heard oral arguments in a case challenging the constitutionality of a federal law that would force ByteDance, the parent company of TikTok, to divest the company. The Department of Justice also presented its case against Google for allegedly monopolizing the digital advertising market.
  • The White House released new guidance on AI reporting for the federal government, and the Federal Trade Commission released a report examining how major social media and streaming services collect data on and surveil consumers to capitalize on their personal information.
  • Nine AI-focused bills advanced out of the House Committee on Science, Space, and Technology.

Read on to learn more about September developments in US tech policy.

Online Child Privacy Legislation Advances in the House Despite Significant Changes

Summary

The Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) were once again at the center of debate in Congress as online child privacy legislation made another push in the House. On September 18, the House Energy and Commerce Committee voted to advance KOSA (H.R. 7891) and COPPA 2.0 (H.R. 7890) by voice vote, despite debates over last-minute changes made to KOSA. Taken together, the bills would grant the federal government more regulatory oversight to protect users under the age of 18 and enact significant new requirements for platform companies. Senate versions of each bill overwhelmingly passed that chamber in July; however, the House version of KOSA that made it out of committee has raised significant concerns among representatives from both parties. The amended version of KOSA, offered by Rep. Gus Bilirakis (R-FL), altered KOSA’s “duty of care” provision by removing language on mitigating “anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors” and replacing it with much broader concerns over the “promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” Rep. Kathy Castor (D-FL), who co-led Rep. Bilirakis’s amended version of KOSA, acknowledged that the changes offered a “weakened” version of the bill “with the aim of passing it to a full House vote.” Rep. Castor also noted that COPPA 2.0 passed out of committee “generally intact” and should be easier to advance than KOSA. The fate of both bills remains uncertain: the House recessed on September 26 without voting on either bill and will not return until November 12, after the upcoming election.

Stakeholder Response

The advancement of KOSA and COPPA 2.0 has sparked debates regarding the bills’ efficacy and potential unintended consequences. Prior to the committee markup, Senate Majority Leader Sen. Chuck Schumer (D-NY) and KOSA co-author Sen. Marsha Blackburn (R-TN) called for the House to advance KOSA and COPPA 2.0, stating confidently that the bills “would pass by the same overwhelming margin” that was seen in the Senate. House Energy and Commerce Chair, Rep. Cathy McMorris Rodgers (R-WA), and other Republican leaders claimed that KOSA would bring a new era of “accountability and safety” to the internet and vowed to work with other members of the House to address concerns over the bill. ParentsTogether, a parent-led education and advocacy nonprofit, met with dozens of lawmakers ahead of this month's markup, urging them to support KOSA. The group also received more than 100,000 signatures on a petition supporting the bill. Fairplay released a statement supporting the advancement of KOSA and COPPA 2.0 and specifically named KOSA as the “most important new law to protect children online in more than 25 years.”

Others opposed the legislation. NetChoice released a statement calling on the House Energy and Commerce Committee to “outright reject” KOSA and COPPA 2.0, claiming that the bills represent “false promises to families and will not make a single American safer online.” The Computer and Communications Industry Association released a statement suggesting that the bills create “overbearing and ambiguous standards” that would lead to a less safe internet. The Chamber of Progress sent a letter to Rep. Frank Pallone (D-NJ), ranking member of the House Committee on Energy and Commerce, raising concerns that KOSA may be used as a means of “further imperiling reproductive rights” and could be utilized to attack women seeking reproductive care. The Electronic Frontier Foundation published an article raising concerns over KOSA’s updated “duty of care” language and how it might impact youth access to information and supportive online communities.

What We’re Reading

  • Gabby Miller, “Pair of Child Online Safety Bills Advance Out Of House Energy and Commerce Committee,” Tech Policy Press
  • Sarah Jeong, “Congress moves forward on the Kids Online Safety Act,” The Verge
  • Mike Isaac and Natasha Singer, “Instagram, Facing Pressure Over Child Safety Online, Unveils Sweeping Changes,” New York Times

Appeals Court Hears Oral Arguments in TikTok Divestiture Case

Summary

On September 16, the US Court of Appeals for the District of Columbia Circuit heard two hours of oral arguments in TikTok Inc. v. Merrick Garland. TikTok and a group of creators filed the initial complaint challenging the constitutionality of a law that would force TikTok’s China-based parent company, ByteDance, to sell the company or face a ban from app stores in the US. President Joe Biden signed the divestiture law, the Protecting Americans From Foreign Adversary Controlled Applications Act (H.R.7521), in April 2024, and absent an injunction or stay from the court, it will go into effect on January 19, 2025 – just one day before the new president is inaugurated.

TikTok’s initial complaint argues that a “qualified divestiture” is neither commercially, legally, nor technically feasible within the law’s 270-day timeline. It also accuses the US government of circumventing the First Amendment by invoking national security. During the oral argument, the plaintiffs’ counsel emphasized this argument, asserting that the law should be subject to strict scrutiny, a standard the government cannot meet since the law is based on “the possibility of future Chinese control” rather than any current threat. The plaintiffs further argued that even if TikTok were owned by a Chinese company, and its content moderation decisions were being made abroad, the law would still suppress US users’ right to free expression. In contrast, the US government focused its argument on a data security rationale, emphasizing the sensitive data TikTok collects from its American users. The government claimed that the data could be “extremely valuable” to a foreign adversary seeking to influence Americans’ views or exploit them as an intelligence asset, and asserted that the law has “nothing to do with protected speech by American citizens.”

Stakeholder Response

Despite overwhelming support in Congress, many internet freedom and free speech organizations and scholars have expressed opposition to the act. Constitutional law scholars from 20 universities filed an amicus brief in the case calling the government’s effort “a tactic to suppress speech, both because it deprives participants in the TikTok marketplace the ability to freely exchange ideas and prohibits companies the right to decide for themselves what speech products to offer the marketplace.” The Knight First Amendment Institute’s amicus brief on behalf of TikTok argued that “the First Amendment does not permit the government to ban access to foreign media where, as here, less restrictive means are available.” EFF and several other organizations also filed an amicus brief in support of TikTok, arguing that the act “directly restricts protected speech and association, deliberately singles out a particular medium of expression for a blanket prohibition, and imposes a prior restraint that will make it impossible for users to speak, access information, and associate through TikTok.”

In support of the US government, a group of former national security officials, including former Attorney General Jeff Sessions, filed an amicus brief arguing that “the Act is a lawful exercise of Congressional authority to protect national security and that it does not run afoul of the First Amendment or any other Constitutional proscription.” Another amicus brief from 21 states led by Republican governors, including Montana and Virginia, argued that “allowing TikTok to operate in the United States without severing its ties to the Chinese Communist Party exposes Americans to the risk of the Chinese Communist Party accessing and exploiting their data.” In addition, an amicus brief on behalf of big tech critics, including Zephyr Teachout and Matthew Stoller, stated that “if this Court rules in favor of Petitioners [TikTok], it would open the door for known corporate affiliates of the Chinese, Russian, North Korean, and Iranian governments to weaponize our Constitution to spy on our population.”

The broader public is also divided on the subject. A survey by the Pew Research Center, conducted between July 15 and August 4, 2024, found declining support among the US public for banning TikTok, with 32% in support, down from 50% in a March 2023 survey. Similarly, a Tech Policy Press/YouGov voter poll conducted between June 28 and July 1, 2024, found respondents were divided on the ban, “with 44% believing that banning TikTok would benefit society, compared to 33% who did not. 23% of respondents did not know.” The poll results showed a modest partisan split, with “53% of Republican respondents favor[ing] the ban, compared to 36% of Democratic and 40% of Independent respondents.”

What We’re Reading

  • Anupam Chander and Gautam Hans, “Key Questions in the US Government’s Case Against TikTok,” Tech Policy Press
  • Milton Mueller, “Banning TikTok: A Self-Inflicted Wound on Liberal Democracy,” Tech Policy Press
  • Tim Bernard, “Eight Notable Themes from US Court of Appeals Oral Arguments In Challenge to TikTok Law,” Tech Policy Press
  • Gabby Miller, “Transcript: TikTok Inc. v. Merrick Garland Oral Arguments in the DC Court of Appeals,” Tech Policy Press

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.

In the executive branch and agencies:

  • The White House released new guidance on AI reporting for the federal government; however, the guidance includes a carve-out for the Department of Defense and other “agencies in the Intelligence Community.”
  • The White House announced new voluntary commitments by private sector actors for combating image-based sexual abuse, particularly relating to AI and including both non-consensual intimate images of adults and child sexual abuse material. Relatedly, a civil society working group organized by CDT, the Cyber Civil Rights Initiative (CCRI), and the National Network to End Domestic Violence (NNEDV) published principles on fighting image-based sexual abuse.
  • The Federal Trade Commission (FTC) published a staff report examining how major social media and streaming services collect data on and surveil consumers to capitalize on their personal information. The report found that the data practices can threaten consumers’ privacy and expose them to harms like identity theft and stalking. It also found that the companies do not “adequately protect children and teens.”
  • The Department of Justice sent subpoenas to Nvidia and third parties in an investigation exploring the company’s position in the AI computing market.
  • The Department of Commerce announced four new members in the National AI Advisory Committee, including Aneesh Chopra, Chief Strategy Officer of Arcadia; Christopher Howard, executive vice president and chief operating officer of Arizona State University; Angie Cooper, Executive Vice President of Heartland Forward; and Beth Cobert, former President of the Markle Foundation.
  • The National Telecommunications and Information Administration (NTIA) launched a request for comments on how “federal policy can support the growth of U.S. data centers to meet the coming demand from artificial intelligence (AI) and other emerging technologies.” Comments are due November 3, 2024.
  • The Equal Employment Opportunity Commission (EEOC) published a report on diversity in the tech sector from 2014-2022, finding that the tech workforce has become more racially and ethnically diverse, but that “Black, Hispanic, and female workers remained substantially underrepresented in the high tech workforce.”
  • The Government Accountability Office (GAO) published a report on agency implementation of the October 2023 executive order on AI, finding that “all 13 of the selected AI management and talent requirements contained in the relevant Executive Order were fully implemented.”

In civil society:

  • Over 140 tech, immigrant rights, and other civil society organizations sent a letter to the Department of Homeland Security calling for the agency to stop the use of AI “for immigration enforcement and adjudication that do not comply with federal requirements for responsible AI.”
  • Public Citizen published a tracker on intimate deepfake state bills and legislation.
  • The Center for Democracy & Technology published a report on how AI chatbots could impact voting and election integrity for voters with disabilities. The report found that over a third of answers provided by chatbots on elections had incorrect information.
  • Seven technology policy civil society organizations sent a letter to the FTC on headhunting and other aggressive hiring practices in the AI industry.
  • The National Association of Attorneys General sent a letter to congressional leaders in support of mandatory “warning labels” on social media platforms to mitigate harms against young people.
  • The Institute for AI Policy and Strategy released a report analyzing how leading AI companies, including Anthropic, Google DeepMind, and OpenAI, have conducted their technical research on “safe AI development.”
  • Data & Society provided an update and shared learnings after more than a year since the launch of their Algorithmic Impact Methods Lab (AIMLab).
  • The Leadership Conference on Civil and Human Rights published a legislative brief on federal AI and civil rights safeguards as well as a survey on state data privacy and civil rights laws.

In Congress:

  • The House Science, Space, and Technology Committee passed nine AI bills to the full House (see Other Legislation Updates below for more information).
  • Senator Ted Cruz (R-TX) sent a letter to the RAND Corporation seeking information on the organization’s role in the creation of the AI Executive Order and the federal government’s “censorship of Americans’ speech” through counter-misinformation and disinformation efforts.

In industry:

  • Meta introduced Instagram Teen Accounts, which automatically provide built-in protections for teen accounts and require users under the age of 16 to have parental permission to change any automatic protections.
  • A group of 50 state broadcasters associations published a letter to congressional leadership in support of the Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act of 2024 (H.R. 9551 / S. 4875).
  • Meta published a report on the impact of AI on content moderation and misinformation, including key lessons for the industry, challenges for content moderation, evaluations of how automation governs platforms, and a set of case studies.
  • OpenAI announced a plan to transition from a non-profit to a for-profit structure that would give Chief Executive Sam Altman equity, amid leadership turbulence at the company.
  • Mozilla released a “vision for Public AI” that includes “a robust ecosystem of initiatives that promote public goods, public orientation, and public use throughout every step of AI development and deployment.”

In the courts:

  • The trial in US v. Google, LLC kicked off with the Department of Justice presenting its case against Google for allegedly monopolizing multiple digital advertising technology products in violation of Sections 1 and 2 of the Sherman Act.
  • The creator of a deceptive AI-generated video depicting Vice President Harris, which Elon Musk promoted on X, filed a complaint challenging the constitutionality of two California state AI laws signed by the governor, A.B. 2839 and A.B. 2655, which restrict the sharing of deceptive content and deepfakes related to elections.
  • The State of New Mexico filed suit against Snap Inc., alleging that Snapchat, its popular social media app, was designed to attract and addict young people; openly fosters and promotes “illicit sexual material involving children;” and facilitates “sextortion” and the trafficking of children, drugs, and guns.

Other Legislation Updates

The following nine bills advanced out of the House Committee on Science, Space, and Technology as part of a full committee markup on September 11. The bills focused on developing and using artificial intelligence (AI) in a safe and trustworthy manner and providing resources to research and standards efforts.

  • AI Advancement and Reliability Act of 2024 - H.R.9497. Introduced by Reps. Ted Lieu (D-CA), Frank Lucas (R-OK), and Zoe Lofgren (D-CA), this bill would “amend the National Artificial Intelligence Initiative Act of 2020 to establish a center on artificial intelligence to ensure continued United States leadership in research, development, and evaluation of the robustness, resilience, and safety of artificial intelligence systems, and for other purposes.” It was favorably reported to the House by voice vote as amended.
  • AI Development Practices Act of 2024 - H.R.9466. Introduced by Reps. James Baird (R-IN) and Ted Lieu (D-CA), this bill would “direct the National Institute of Standards and Technology (NIST) to catalog and evaluate emerging practices and norms for communicating certain characteristics of artificial intelligence systems” related to “transparency, robustness, resilience, security, safety, and usability.” It was favorably reported to the House by voice vote.
  • CREATE AI Act of 2023 - H.R.5077. Introduced by Reps. Anna Eshoo (D-CA), Michael McCaul (R-TX), Don Beyer (D-VA), and Jay Obernolte (R-CA), this bill, formally known as the Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023, would “establish the National Artificial Intelligence Research Resource.” It was favorably reported to the House by voice vote as amended.
  • Small Business Artificial Intelligence Advancement Act - H.R.9197. Introduced by Reps. Mike Collins (R-GA) and Haley Stevens (D-MI), this bill would “require the Director of the National Institute of Standards and Technology (NIST) to develop resources for small businesses in utilizing artificial intelligence.” It was favorably reported to the House by voice vote as amended.
  • Expanding AI Voices Act - H.R.9403. Introduced by Reps. Valerie Foushee (D-NC) and Frank Lucas (R-OK), this bill would “support a broad and diverse interdisciplinary research community for the advancement of artificial intelligence (AI) and AI-powered innovation” through partnerships with higher education institutions to expand AI capacity in historically underrepresented populations in STEM. It was favorably reported to the House by voice vote as amended.
  • LIFT AI Act - H.R.9211. Introduced by Reps. Thomas Kean (R-NJ) and Gabe Amo (D-RI), this bill, formally known as the Literacy in Future Technologies Artificial Intelligence Act, would “improve educational efforts related to artificial intelligence literacy at the K through 12 level.” It was favorably reported to the House by voice vote as amended.
  • The Nucleic Acid Screening for Biosecurity Act - H.R.9194. Introduced by Reps. Yadira Caraveo (D-CO) and Richard McCormick (R-GA), this bill would “amend the Research and Development, Competition, and Innovation Act to support nucleic acid screening.” It was favorably reported to the House by voice vote.
  • Workforce for AI Trust Act - H.R.9215. Introduced by Rep. Frank Lucas (R-OK), this bill would “facilitate the growth of multidisciplinary and diverse teams that can advance the development and training of safe and trustworthy artificial intelligence systems, and for other purposes.” It was favorably reported to the House by voice vote.
  • The NSF AI Education Act of 2024 - H.R. 9402. Introduced by Rep. Andrea Salinas (D-OR), this bill would “support National Science Foundation education and professional development relating to artificial intelligence, and for other purposes.” It was favorably reported to the House by voice vote as amended.

The following two AI-related bipartisan bills advanced out of the House Committee on Science, Space, and Technology as part of a full committee markup on September 25. The bills focused on addressing US energy, research, and technology needs.

  • The AI Incident Reporting and Security Enhancement Act - H.R.9720. Introduced by Reps. Deborah Ross (D-NC), Jay Obernolte (R-CA), and Donald Beyer (D-VA), this bill would direct the National Institute of Standards and Technology (NIST) to “update the national vulnerability database to reflect vulnerabilities to artificial intelligence systems and study the need for voluntary reporting related to artificial intelligence security and safety incidents.” It was favorably reported to the House by voice vote.
  • The Department of Energy Artificial Intelligence Act of 2024 - H.R.9671. Introduced by Rep. Suzanne Bonamici (D-OR), this bill would “provide guidance for and investment in the research and development activities of artificial intelligence at the Department of Energy, and for other purposes.” It was favorably reported to the House by voice vote as amended.

The following bills were introduced across the House and Senate in September:

  • AI Grand Challenges Act of 2024 - H.R.9475. Introduced by Reps. Ted Lieu (D-CA) and Jay Obernolte (R-CA), the bill would “authorize the Director of the National Science Foundation to identify grand challenges and award competitive prizes for artificial intelligence research and development.” It is a companion bill to the Senate’s AI Grand Challenges Act of 2024 (S.4236), which was introduced in June by Sens. Cory Booker (D-NJ), Martin Heinrich (D-NM), and Mike Rounds (R-SD).
  • AI Incident Reporting and Security Enhancement Act - H.R.9720. Introduced by Reps. Deborah Ross (D-NC), Jay Obernolte (R-CA), and Don Beyer (D-VA), this bill would “direct the Director of the National Institute of Standards and Technology (NIST) to update the national vulnerability database to reflect vulnerabilities to artificial intelligence systems” and “study the need for voluntary reporting related to artificial intelligence security and safety incidents.”
  • Chip EQUIP Act - S.5002. Introduced by Sens. Mark Kelly (D-AZ) and Marsha Blackburn (R-TN), this bipartisan bill, also referred to as “the Chip Equipment Quality, Usefulness, and Integrity Protection Act of 2024,” would prohibit entities that receive federal funding for semiconductors from purchasing certain semiconductor manufacturing equipment from foreign entities of concern.
  • Consumers LEARN AI Act - H.R.9673. Introduced by Reps. Lisa Blunt Rochester (D-DE) and Marcus Molinaro (R-NY), this bill, formally known as the Consumer Literacy and Empowerment to Advance Responsible Navigation of Artificial Intelligence Act, would “direct the Secretary of Commerce to develop a national strategy regarding artificial intelligence consumer literacy and conduct a national artificial intelligence consumer literacy campaign.”
  • Federal Cyber Workforce Training Act of 2024 - H.R.9520. Introduced by Rep. Pat Fallon (R-TX), this bill would “require the National Cyber Director to submit to Congress a plan to establish an institute within the Federal Government to serve as a centralized resource and training center for Federal cyber workforce development.”
  • Modernizing Data Practices to Improve Government Act - S.5109. Introduced by Sens. Gary Peters (D-MI) and Todd Young (R-IN), this bill would extend the Chief Data Officer Council’s sunset and add new authorities for improving Federal agency data governance, which includes enabling reliable and secure adoption of emerging technologies and artificial intelligence.
  • PATHS Act - H.R.9459. Introduced by Reps. Michael Guest (R-MS) and Glenn Ivey (D-MD), this bill, formally known as the Producing Advanced Technologies for Homeland Security Act, would “amend the Homeland Security Act of 2002 to enable secure and trustworthy technology through other transaction contracting authority.”
  • Protecting Data at the Border Act - H.R.9567. Introduced by Reps. Ted Lieu (D-CA), Adriano Espaillat (D-NY), Eleanor Holmes Norton (D-DC), and Don Beyer (D-VA), this bill would “ensure the digital contents of electronic equipment and online accounts belonging to or in the possession of United States persons entering or exiting the United States are adequately protected at the border.”
  • PROTOCOL Act - H.R.9450. Introduced by Reps. August Pfluger (R-TX) and Debbie Dingell (D-MI), this bill, formally known as the Provide Rigorous Oversight To Optimize Connectivity and Offset Latency Act, would “amend the Infrastructure Investment and Jobs Act and the ACCESS BROADBAND Act to provide for improvements to the broadband Deployment Locations Map of the Federal Communications Commission and the broadband infrastructure funding database of the National Telecommunications and Information Administration.”
  • SHARE IT Act - H.R.9566. Introduced by Reps. Nick Langworthy (R-NY) and William Timmons (R-SC), this bill, also referred to as the Source Code Harmonization and Reuse in Information Technology Act, would broadly “require governmentwide source code sharing.”
  • Unleashing AI Innovation in Financial Services Act - H.R.9309. Introduced by Reps. French Hill (R-AR) and Ritchie Torres (D-NY), the bill would provide regulated financial entities with regulatory sandboxes that permit them to “experiment with artificial intelligence without expectation of enforcement actions.”
  • Workforce for AI Trust Act - H.R.9211. Introduced by Reps. Zoe Lofgren (D-CA) and Frank Lucas (R-OK), this bill would “facilitate the growth of multidisciplinary and diverse teams that can advance the development and training of safe and trustworthy artificial intelligence systems.”
  • Workforce of the Future Act of 2024 - S.5031. Introduced by Sens. Laphonza Butler (D-CA) and Mazie Hirono (D-HI), this bill would authorize the Secretary of Education to create a program that increases access to emerging and advanced technology in prekindergarten through grade 12 education in order to “promote a 21st century artificial intelligence workforce.”
  • Workforce of the Future Act of 2024 - H.R.9607. Introduced by Reps. Barbara Lee (D-CA) and Emanuel Cleaver (D-MO), this companion bill “would identify how artificial intelligence (AI) and emerging technologies will change the workforce of the future and provide workers, teachers, and our nation's students with the resources to develop integral skills required to participate in that workforce.”
  • To direct the Department of Defense… - H.R.9626. Introduced by Rep. Ro Khanna (D-CA), this bill would “direct the Department of Defense to develop a plan for establishing a secure computing and data storage environment for the testing of AI trained on biological data.”
  • To improve the tracking… - H.R.9737. Introduced by Reps. Deborah Ross (D-NC) and Don Beyer (D-VA), this bill would “improve the tracking and processing of security and safety incidents and risks associated with artificial intelligence.”
  • To amend the Federal Election Campaign Act… - H.R.9639. Introduced by Rep. Adam Schiff (D-CA), this bill would “amend the Federal Election Campaign Act of 1971 to clarify that the prohibition under such Act against the fraudulent misrepresentation of campaign authority and the fraudulent solicitation of funds includes misrepresentation through the use of content generated in whole or in part with the use of artificial intelligence (generative AI), and for other purposes.”
  • Recognizing access to water… - H.Res.1478. Introduced by Rep. Rashida Tlaib (D-MI), this resolution would recognize “access to water, sanitation, electricity, heating, cooling, broadband communications, and public transportation as basic human rights and public services that must be accessible, safe, justly sourced and sustainable, acceptable, sufficient, affordable, climate resilient, and reliable for every person.”
  • A resolution countering disinformation… - H.Res.1471. Introduced by Rep. Joaquin Castro (D-TX), this resolution calls for “countering disinformation, propaganda, and misinformation in Latin America and the Caribbean, and calling for multi-stakeholder efforts to address the significant detrimental effects that the rise in disinformation, propaganda, and misinformation in regional information environments has on democratic governance, human rights, and United States national interests.” A companion resolution (S.Res.833) was introduced in the Senate by Sen. Ben Ray Luján (D-NM).

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Gabby Miller
Gabby Miller is a staff writer at Tech Policy Press. She was previously a senior reporting fellow at the Tow Center for Digital Journalism, where she used investigative techniques to uncover the ways Big Tech companies invested in the news industry to advance their own policy interests. She’s an alu...
Ben Lennett
Ben Lennett is managing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
