
June 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Ben Lennett / Jun 28, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is Managing Editor of Tech Policy Press.

The US Supreme Court. Shutterstock

As the focus of US policymakers has increasingly shifted to the 2024 election, progress on legislation and other federal policy was limited in June 2024. Still, there were some important developments in the courts, as well as a surprising setback to federal privacy legislation. Here are some highlights:

  • The US Supreme Court ruled 6-3 in favor of the Biden Administration in Murthy v. Missouri, finding that the plaintiffs lacked standing to bring a case alleging that the administration’s efforts to combat COVID-19 misinformation on social media violated the First Amendment.
  • A revised version of the American Privacy Rights Act (APRA) faced significant opposition and was pulled from a House Energy and Commerce Committee markup session scheduled for June 27.
  • AI policy continued to be a focus for federal agencies and policymakers. The Treasury Department published a request for information on AI tools in the financial sector, and multiple AI-related bills were introduced in Congress, including legislation addressing AI-generated non-consensual intimate imagery, requiring transparency in AI-generated election ads, and establishing Chief AI Officer positions at federal agencies. The State of California also continued to debate a suite of bills to regulate AI.

Read on to learn more about June developments in US tech policy.

Supreme Court Finds in Favor of Biden Administration in Alleged Jawboning Case

  • Summary: The US Supreme Court ruled in favor of the Biden Administration in Murthy v. Missouri. In a 6-3 ruling, the court reversed a decision by the Court of Appeals for the Fifth Circuit that had found that the administration had violated the plaintiffs’ First Amendment rights, finding instead that the plaintiffs did not have standing to bring the case. At issue in Murthy were the Biden administration’s efforts to limit the spread of COVID-19 misinformation on social media platforms. In May 2022, Missouri, Louisiana, and several individual social media users filed a lawsuit alleging that the administration’s communication with social media platforms amounted to overt government strong-arming, or jawboning. A district court ruled in favor of the plaintiffs, issuing a partial preliminary injunction, and the Fifth Circuit largely upheld the injunction. However, the Supreme Court found that “Neither the individual nor the state plaintiffs established Article III standing to seek an injunction against any defendant.”
  • Stakeholder Response: Public interest and free speech groups were generally pleased with the decision. As Meetali Jain, executive director of the Tech Justice Law Project, offered: “Today, the Court preserved the ability of government and researchers to ensure Americans receive true and accurate information from social media platforms on subjects ranging from disaster preparedness to foreign interference with elections.” NetChoice, an industry association representing tech companies, including Meta and X (Twitter), was also supportive of the Court’s decision. “The Court’s decision in Murthy underscores the importance of protecting online services’ First Amendment right to editorial judgment,” said Carl Szabo, NetChoice vice president and general counsel, in a statement. NetChoice is awaiting a decision from the court in two cases concerning the constitutionality of social media laws passed in Florida and Texas.
  • Other groups, such as the Knight First Amendment Institute, agreed with the court’s decision to reverse the Fifth Circuit, but would have liked the court to “clarify the line between permissible attempts to persuade and impermissible attempts to coerce.” “This guidance would have been especially valuable in the months leading up to the election,” noted the group’s statement.
  • The plaintiffs in the case, along with right-wing officials and groups, opposed the court’s decision. Louisiana Attorney General (AG) Liz Murrill argued in a statement on social media that “a majority of the Supreme Court gives a free pass to the federal government to threaten tech platforms into censorship and suppression of speech that is indisputably protected by the First Amendment.” Missouri AG Andrew Bailey pledged to continue the case: “Missouri is not done. We are going back to the district court to obtain more discovery in order to root out Joe Biden’s vast censorship enterprise once and for all.” Similarly, House Judiciary Chairman Rep. Jim Jordan (R-OH), who has used the Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government to investigate academic researchers and institutions focused on disinformation, vowed to continue that work: “While we respectfully disagree with the Court’s decision, our investigation has shown the need for legislative reforms, such as the Censorship Accountability Act, to better protect Americans harmed by the unconstitutional censorship-industrial complex. Our important work will continue.”
  • What We’re Reading: Tech Policy Press reported on the court’s decision, and collected a range of responses both in favor and opposed to the Court’s decision from civil society groups, legal experts, lawmakers, and the plaintiffs. An article by former Department of Justice official Andrew Weissmann published in Just Security examines what Murthy v. Missouri did and did not say.

APRA Markup Canceled, Leaving the Bill’s Future Uncertain

  • Summary: A revised version of the American Privacy Rights Act (APRA) was among the bills pulled from congressional consideration when the House Energy and Commerce Committee abruptly canceled its markup session on June 27. The decision to cancel the hearing arose amid news that House Republican leaders vowed to “scuttle the bill whether it was approved by the committee or not.” APRA also faced significant public scrutiny over revisions made to the bill earlier this month. Committee leaders Reps. Cathy McMorris Rodgers (R-WA) and Frank Pallone (D-NJ) circulated the first discussion draft of APRA in April 2024 to establish a comprehensive national standard for data privacy, and the bill has since gone through multiple rounds of changes. Major revisions in this month’s draft included the removal of the “civil rights and algorithms” provision, which was designed to establish safeguards against algorithmic decision-making bias. Other relevant updates included a new section revising preemption of laws or rules designed to protect children and teens, limiting preemption to situations where there is a clear conflict with APRA standards while allowing states to provide greater protections, as well as the incorporation of the “actual knowledge” standard from the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), which was missing from the previous version of APRA. The future of the bill is increasingly in doubt after its removal from the Committee markup.
  • Stakeholder Response: APRA faced strong opposition from civil society, industry, and other relevant stakeholders ahead of the markup session’s cancellation. In response to the new APRA draft, more than 50 civil society groups, including the Leadership Conference on Civil and Human Rights, the Lawyers’ Committee for Civil Rights Under Law, and the American Civil Liberties Union, sent a letter to House Energy and Commerce Committee leadership urging them to postpone APRA’s committee markup and stall the bill’s progress unless the civil rights provisions were restored and sufficient stakeholder consultation occurred. The letter condemned the revisions, stating that the deletion of the civil rights provisions was an “unacceptable change to the bill and its scope,” and further noted that the removal of such provisions occurred without “prior stakeholder consultation and without studying the impact of the bill’s ability to address data-driven discrimination in housing, employment, credit, education, health care, insurance, and other economic opportunities.”
  • United for Privacy, a coalition of industry organizations led by TechNet, sent a letter to Reps. McMorris Rodgers and Pallone encouraging them to modify APRA to ensure the law would preempt state regulations. The letter noted that “without full preemption of state laws, APRA will add to the privacy patchwork, create confusion for consumers and hinder economic growth.” Privacy for America, a group of companies and trade associations with members from across industries including advertising, travel, hospitality, media, and financial services, sent a letter to Reps. McMorris Rodgers and Pallone in strong opposition to APRA, arguing that the bill would prevent everyday engagement between businesses and consumers, stifle the economy, and severely damage “small, mid-size, and start-up businesses that rely on data-driven advertising.” With the cancellation of APRA’s markup, it is increasingly uncertain whether the bill will receive a House floor vote this legislative session.
  • What We’re Reading: The National Law Review provided an overview of the notable revisions made in APRA’s discussion draft. Axios Pro released a tech policy legislative tracker that aims to keep tabs on the most consequential tech bills working through Congress. Tech Policy Press also published an op-ed from UnidosUS calling on Congress to restore fairness protections in the APRA and an analysis by privacy expert Joe Jerome of ten different and potentially conflicting goals for privacy legislation in the US.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.

In the executive branch and agencies:

  • The Treasury Department published a request for information on emerging and existing AI tools in the financial sector, specifically on “the use of AI in financial products and services, risk management, capital markets, internal operations, customer service, marketing and regulatory compliance.”
  • The Department of Homeland Security (DHS) announced that it hired the first ten experts in the “AI Corps” as part of its hiring sprint to recruit 50 AI technology experts in 2024. The AI Corps will be responsible for exploring opportunities to leverage AI responsibly and safely across DHS’ strategic areas, including efforts to combat child sexual exploitation and abuse, fortify critical infrastructure, and enhance cybersecurity, among others.
  • Customs and Border Protection (CBP) agents have begun utilizing AI to track “precursor chemicals” along the US-Mexico border in an effort to combat fentanyl production.
  • The US Digital Service is exploring the use of AI to improve the government’s technology use and service delivery.
  • The National Institute of Standards and Technology (NIST), the Digital Benefits Network (DBN) at the Beeck Center for Social Impact + Innovation at Georgetown University, and the Center for Democracy and Technology (CDT) announced a new two-year collaboration to develop voluntary resources on identity proofing for online applications.

In Congress:

  • The New York Times reported that US lawmakers were targeted by an influence campaign by Israel’s Ministry of Diaspora Affairs to promote pro-Israel sentiments. The campaign began in October 2023 and, at its peak, included hundreds of fake social media accounts and three fake news sites posting AI-generated content.
  • The House Committee on Small Business hosted a hearing titled “Under the Microscope: Examining the Censorship-Industrial Complex and its Impact on American Small Businesses.” In his opening statement, Committee Chairman Roger Williams (R-TX) claimed that government and third-party efforts to stop misinformation are “making it harder for conservative businesses to succeed online.”

In civil society:

  • At the end of May, the ACLU requested that the Federal Trade Commission (FTC) investigate three AI products developed by Aon Consulting over alleged discrimination: a personality assessment test, a video interview tool, and a cognitive ability assessment screening device.
  • The Electronic Privacy Information Center (EPIC) launched an AI Legislation Scorecard that provides a rubric for evaluating the strength of AI bills against provisions like “data minimization requirements, impact assessment and testing obligations, prohibitions on particularly harmful AI uses, and robust enforcement mechanisms.”
  • At the first annual “DC Privacy Forum: AI Forward,” the Future of Privacy Forum (FPF) announced that it will be launching the FPF Center for AI to support AI advancement by “establishing best practices, research, legislative tracking and other resources,” and acting as a source of information and resources for lawmakers and relevant stakeholders.
  • Over 50 civil society organizations signed a letter to congressional leadership urging them to grant a floor vote to three bills aimed at protecting federal elections from deceptive content made with generative AI.
  • Sixteen civil society organizations published a letter encouraging President Biden to renominate Sharon Bradford Franklin as chair of the Privacy and Civil Liberties Oversight Board (PCLOB), which is currently playing a key role in the development of the EU-US Data Privacy Framework.
  • The American Economic Liberties Project, the Demand Progress Education Fund, and the Revolving Door Project sent letters to the National Telecommunications and Information Administration (NTIA) and Department of Justice (DOJ) “urging them to end VeriSign Inc.’s government-designated monopoly over domain registration.”
  • Researchers at the University of Texas at Austin’s Center for Media Engagement at the Moody College of Communication published a report on what political professionals think about generative AI in US elections. They found a wide range of perspectives on whether AI should be used in the election context, and that while AI’s content creation abilities have been highly publicized, the technology also holds significant potential to democratize information access. Dean Jackson and Zelly Martin published an op-ed in Tech Policy Press on the report’s findings.
  • Chamber of Progress launched a new project, Generate & Create, which promotes AI use in art through legal advocacy, policy advocacy, and media.
  • Mijente and Just Futures Law released a report on the Department of Homeland Security’s use of AI to make immigration decisions.

In industry:

  • Telecom industry actors filed at least four suits against the Federal Communications Commission’s (FCC) net neutrality rules in June: the National Cable & Telecommunications Association (NCTA) and the Texas Cable Association in the Fifth Circuit Court of Appeals, USTelecom and the Ohio Telecom Association in the Sixth Circuit, the Cellular Telephone Industries Association (CTIA) in the D.C. Circuit, and the Missouri Internet & Television Association in the Eighth Circuit.
  • 404 Media reported that it discovered an internal Google database listing privacy and security violations. The database included thousands of privacy incidents reported by Google employees, such as events where Google recorded audio clips of children's voices, collected data on trips and home addresses, maintained records of deleted watch history, and mismanaged other sensitive user data.

In international governance:

  • At the 50th G7 Summit in Italy, leaders agreed to begin developing an action plan on coordination towards a “shared understanding of risk management and advance international standards for AI development and deployment.” In a special AI ethics session with G7 leaders, Pope Francis voiced similar concerns, expressing a need for oversight to protect people from potential AI harms.

In the courts:

  • The Department of Justice’s antitrust case alleging that Google monopolizes digital advertising technologies moved forward without a jury.
  • Thomson Reuters filed its opening brief in the AI copyright suit Thomson Reuters v. ROSS as the case heads to a jury trial.

Other Legislation Updates

The following bills made progress in June:

  • The California state legislature continued to debate a suite of bills to regulate artificial intelligence, amending 11 AI bills this month.

The following bills were introduced across the House and Senate in June:

  • TAKE IT DOWN Act (S.4569, introduced by Sen. Ted Cruz (R-TX) and co-sponsored by Sens. Amy Klobuchar (D-MN), Cynthia Lummis (R-WY), Richard Blumenthal (D-CT), Shelley Moore Capito (R-WV), Jacky Rosen (D-NV), Ted Budd (R-NC), Laphonza Butler (D-CA), Todd Young (R-IN), Joe Manchin (I-WV), John Hickenlooper (D-CO), Bill Cassidy (R-LA), and Martin Heinrich (D-NM)): This bill “would criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or ‘deepfake pornography’), and require social media and similar websites to have in place procedures to remove such content upon notification from a victim.”
  • Small Business AI Training and Toolkit Act of 2024 (S.4487, introduced by Sen. Maria Cantwell (D-WA) and Sen. Jerry Moran (R-KS)): This bill “would authorize the U.S. Department of Commerce to work with the Small Business Administration to create and distribute artificial intelligence training resources and tools to help small businesses leverage AI in their operations.”
  • Ending FCC Meddling in Our Elections Act (S.4594, sponsored by Sen. Mike Lee (R-UT)): This bill would “prohibit the Federal Communications Commission from promulgating or enforcing rules regarding disclosure of AI-generated content in political advertisements.”
  • AI Transparency in Elections Act (H.R. 8868, introduced by Rep. Joseph Morelle (D-NY)): This bill would amend the Federal Election Campaign Act of 1971 to “require political ads created or altered by AI to have a disclaimer, except when AI is used for only minor alterations, such as color editing, cropping, resizing, and other immaterial uses. The bill also requires the Federal Election Commission to address violations of the legislation quickly.”
  • Chip Equipment Quality, Usefulness, and Integrity Protection Act of 2024 (Chip EQUIP Act) (H.R. 8826, S.4585, introduced by Reps. Zoe Lofgren (D-CA) and Frank Lucas (R-OK) with Sens. Mark Kelly (D-AZ) and Marsha Blackburn (R-TN)): This bill would bar recipients of CHIPS Act funding from purchasing semiconductor components from China.
  • Social Media and AI Resilience Toolkits in Schools Act (SMART in Schools Act) (S.4614, introduced by Sen. Ed Markey (D-MA)): This bill would instruct the Department of Education and the Department of Health and Human Services to develop “resource toolkits on the impact of artificial intelligence (AI) and social media” on student mental health.
  • Artificial Intelligence Public Awareness and Education Campaign Act (S.4596, introduced by Sens. Todd Young (R-IN) and Brian Schatz (D-HI)): This bill would “require the Secretary of Commerce to carry out a public awareness and education campaign to provide information regarding the benefits of, risks relating to, and the prevalence of AI in the daily lives of individuals in the United States.”
  • AI Leadership to Enable Accountable Deployment (AI LEAD) Act (H.R. 8756, introduced by Rep. Gerry Connolly (D-VA) and Sens. Gary Peters (D-MI) and John Cornyn (R-TX)): This bill would establish “a Chief AI Officer position at every federal agency” and create “an interagency council composed of those officers.”
  • International Artificial Intelligence Research Partnership Act of 2024 (H.R. 8700, introduced by Rep. Norma Torres (D-CA)): This bill would “offer guidance and assistance to cities in the United States interested in establishing international partnerships for artificial intelligence research and resources” and “promote and facilitate the development of collaborative artificial intelligence research initiatives between U.S. cities and their global counterparts.”
  • Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act (S.4495, introduced by Sen. Gary Peters (D-MI) and Sen. Thom Tillis (R-NC)): This bill would require “agencies to assess and address the risks of their AI uses prior to buying and deploying the technology” and ensure “the federal government reaps the benefits of this technology through the creation of pilot programs to test more flexible, competitive purchasing practices.”
  • Preventing Algorithmic Facilitation of Rental Housing Cartels Act (H.R. 8622, introduced by Rep. Becca Balint (D-VT) and Rep. Jesús García (D-IL)): This bill would “prohibit digital price fixing by landlords” who may be “using algorithms to collude to further increase rents for working families.”

We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart or contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Ben Lennett
Ben Lennett is managing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
