July 2024 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Gabby Miller, Ben Lennett / Aug 1, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is managing editor and Gabby Miller is staff writer at Tech Policy Press.
After a slow June, July brought significant developments in tech policy, particularly in the courts and the legislative branch. Here are some highlights:
- The US Supreme Court issued important rulings, including vacating and remanding two cases (Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton) involving state laws in Florida and Texas that seek to regulate social media content moderation. The Court also overturned the longstanding Chevron doctrine in Loper Bright Enterprises v. Raimondo, reshaping how courts review federal agencies’ interpretations of their statutory authority.
- The Senate passed major online child safety legislation: the Kids Online Safety and Privacy Act (KOSPA), a new bill that combines the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act (COPPA) 2.0. It also passed the DEFIANCE Act of 2024, which provides a federal civil remedy for victims of sexually explicit deepfakes. The Senate Commerce Committee also advanced several AI-related bills addressing AI research, education, and standards.
- The White House announced nearly $100 million in investments into public interest technology from government, academia, and civil society. The White House’s Kids Online Health and Safety Task Force also released recommendations and best practices on how youth can use social media and online platforms more safely.
Read on to learn more about July developments in US tech policy.
The Supreme Court Sends NetChoice Cases Back to Lower Courts, but Reshapes Tech Policy with Loper Ruling
- Summary: The US Supreme Court ruled unanimously to vacate and remand two important cases regarding state laws impacting social media content moderation. The cases, Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton, revolve around laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Because the laws were written broadly and cover more than just social media platforms like Facebook and YouTube, the Court sent the cases back to the lower courts, instructing them to assess “the state laws’ scope” and “decide which of the laws’ applications violate the First Amendment, and to measure them against the rest.”
- In another opinion, covering Loper Bright Enterprises v. Raimondo and Relentless, Inc. v. Department of Commerce, the Court overturned a longstanding principle of administrative law known as the Chevron doctrine. Under Chevron, courts generally deferred to federal agencies’ reasonable interpretations of ambiguous statutes when those agencies developed regulations and rules. With the decision, courts will now decide all relevant questions of law and apply their own judgment on a case-by-case basis. This opens the door to more challenges to existing and future regulations and may restrict agencies’ ability to update regulations without going back to Congress.
- Stakeholder Response: Public interest groups were generally pleased with the NetChoice decisions, even as they stressed the need for platforms to do more to address content harms. Nora Benavidez at Free Press said: “While Free Press believes that tech companies should bolster their platform-accountability measures across the board, the First Amendment is clear: The government does not have the right to impose rules on how companies like Meta and Google should accomplish this.” Free speech groups also agreed with the decision. Jameel Jaffer of the Knight First Amendment Institute declared, “This is a careful and considered ruling that decisively rejects the broadest arguments made by the states and the social media platforms. It properly recognizes that platforms are ‘editors’ under the First Amendment, but it also dismisses, for good reasons, the argument that regulation in this sphere is categorically unconstitutional.”
- In contrast, many public interest and civil rights groups expressed concern about the Loper decision. Harold Feld at Public Knowledge said, “Today’s opinion is the latest in what Professor Mark Lemley dubbed ‘the Imperial Supreme Court’ – a Court intent on elevating itself over the other two branches of government as the ultimate decider of policy rather than an interpreter of law.” Maya Wiley of The Leadership Conference on Civil and Human Rights said, “The civil rights community is outraged that the extremist majority of the U.S. Supreme Court has once again demonstrated it will place the interests of the rich and powerful over decades of settled law and the protection of our civil and human rights.”
- Industry groups, who had challenged both the Texas and Florida social media laws, were happy with the court’s decision. Chris Marchese of the NetChoice Litigation Center called the decision from the Supreme Court “a victory for First Amendment rights online.” Similarly, Matt Schruers of the Computer & Communications Industry Association (CCIA) was “encouraged that a majority of the Court has made clear that the government cannot tilt public debate in its favored direction. There is nothing more Orwellian than government attempting to dictate what speech should be carried, whether it is a newspaper or a social media site.”
- The defendants’ reactions to the Court’s NetChoice decision were mixed. Florida’s Attorney General was “pleased that SCOTUS agreed with Florida and rejected the lower court’s flawed reasoning...” The Eleventh Circuit had blocked Florida’s social media law on constitutional grounds, but the Supreme Court’s ruling sends the case back to the lower court for another review. In contrast, the Fifth Circuit found the Texas law to be constitutional, but per the Supreme Court’s decision, it too must now take another look. Texas Attorney General Ken Paxton vowed “to keep fighting for our law that protects Texans’ voices.”
- What We’re Reading: Tech Policy Press reported on the Court’s NetChoice decision and collected a range of responses, both in favor of and opposed to the Court’s decision, from civil society groups, legal experts, lawmakers, and the plaintiffs. Vera Eidelman of the ACLU wrote in Tech Policy Press that the Court’s NetChoice ruling makes clear that the government should not aim to suppress the freedom of speech online. In a series of articles for Tech Policy Press, public interest advocates and other experts offered different takes on the Supreme Court’s decision to overturn the Chevron doctrine. Free Press policy counsel Yanni Chen said that the Supreme Court decision in Loper is a damaging blow to federal agencies’ processes to protect the public. Mark MacCarthy, a professor at Georgetown University, said that there is still policy space for agencies to craft measures that protect the public interest. Ariel Fox Johnson wrote that the reality of the Loper decision is nuanced for privacy, and youth privacy in particular.
The Senate Passes Major Online Child Safety Legislation, but its Fate Remains Uncertain in the House
- Summary: This month, the Senate passed the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act (COPPA) 2.0 in a combined bill called the Kids Online Safety and Privacy Act (KOSPA), representing “the most significant restrictions on tech platforms to clear a chamber of Congress in decades,” according to the Washington Post. KOSA (S.1409), sponsored by Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), would establish a duty of care standard for social media companies that operate platforms utilized by minors to “mitigate the promotion of harmful content and addictive features.” Meanwhile, COPPA 2.0 (S.1418), led by Sens. Ed Markey (D-MA) and Bill Cassidy (R-LA), would limit data collection on minors up to 16 years old and prohibit targeted advertising to children online. On July 25, the combined bill passed a Senate cloture vote 86 to 1 as part of a larger legislative vehicle. On July 30, the bill received overwhelming support on the Senate floor, passing by a 91-3 vote.
- KOSA and COPPA 2.0 will now head to the House, where the bills face significant uncertainty. The bills were expected to receive markups in the House Energy and Commerce Committee late last month, but the meeting was abruptly canceled amid infighting among Republican leadership. The House did not reschedule these markups before breaking for its annual August recess. Despite this setback, House Speaker Mike Johnson (R-LA) made optimistic comments in support of the bills.
- Stakeholder Response: KOSA, in particular, was a source of controversy among industry leaders, lobby groups, and other civil society organizations. Earlier this year, KOSA received broad industry support from Snap, Microsoft, Pinterest, and X after lawmakers amended the bill to remove “state attorneys general enforcement.” In opposition, NetChoice urged senators to vote no on the bill, citing cybersecurity, data privacy, and constitutional issues while acknowledging that “Congress has good intentions in wanting to address online issues, especially concerning children.” Similarly, the American Civil Liberties Union and more than 300 high school students met with lawmakers to urge them to oppose the legislation, suggesting that the bill would enable censorship and endanger young people’s “access to important resources, like gender identity support, mental health materials, and reproductive healthcare.”
- What We’re Reading: Tech Policy Press published a piece by Vicki Harrison, program director for the Center for Youth Mental Health and Wellbeing in the Stanford Department of Psychiatry and Behavioral Science, and Anne Collier, founder and executive director of the nonprofit Net Safety Collaborative, discussing how young people should have a voice in shaping online child safety and privacy legislation. They discussed how KOSA could impact the ability of LGBTQ+ youth to access online resources, information, and communities. Politico examined how a coalition of advocacy groups, including Common Sense Media, ParentsTogether, Fairplay, and the Tech Oversight Project, among others, have successfully supported child online safety legislation at the state level and seek to replicate that success with federal lawmakers.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.
In the executive branch and agencies:
- The White House announced nearly $100 million in investments into public interest technology from government, academia, and civil society. This support included at least $48 million from the National Science Foundation and $20 million each from the Ford Foundation and Siegel Family Endowment.
- The Department of Education released a guide with recommendations on using AI in education, including guidelines on design, impact and rationale, equity and civil rights, safety and security, and transparency.
- The Federal Trade Commission issued orders requiring eight companies to turn over information about the impact of their surveillance pricing products and services on privacy, competition, and consumer protection.
- The Kids Online Health and Safety Task Force released recommendations and best practices on how youth can use social media and online platforms more safely, including resources for parents and caregivers, industry recommendations, a research agenda, and next steps.
- The Federal Communications Commission (FCC) published a proposal to require that TV and radio political ads disclose when AI has been used.
- The National Institute of Standards and Technology (NIST) released a public draft of new guidelines for developers of dual-use foundation models to manage risk. It also released Dioptra, a publicly available testing platform for evaluating the defensive capabilities of AI tools. Finally, NIST released a companion to its previously released AI risk management framework and a plan for US-international cooperation on AI standard-setting.
- The National Telecommunications and Information Administration (NTIA) published a report on dual-use foundation models with public model weights, arguing for a “cautious yet optimistic path” that supports the benefit of publicly available AI models with widely available model weights while continuing to monitor for risks and harms.
In Congress:
- Reps. Cathy McMorris Rodgers (R-WA), Morgan Griffith (R-VA), and Robert Latta (R-OH) sent a letter to the National Telecommunications and Information Administration (NTIA) requesting that all communications between the NTIA and state broadband offices about the Broadband Equity, Access, and Deployment (BEAD) program be sent to the Committee on Energy and Commerce for review.
In civil society:
- The Center for Democracy and Technology (CDT) published a report on the importance of disability data in mitigating the discriminatory impacts of technology and algorithms. It also published recommendations for generative AI developers on the use of generative AI in elections.
- CDT and 14 other civil society organizations and individuals signed a letter calling on the Biden Administration to include a baseline set of requirements in the upcoming National Security Memorandum on AI that protects civil rights and civil liberties.
- The Artificial Intelligence Policy Institute (AIPI) released a poll conducted by YouGov on public perception of AI and AI risk. They found that 86 percent of voters believe that “AI could accidentally cause a catastrophic event” and that 82 percent of voters “don’t trust tech executives to regulate AI.”
- The Data Provenance Initiative published a paper on the “consent protocols for the web domains underlying AI training corpora,” based on a large-scale, longitudinal audit of 14,000 web domains that examines how data consent practices are changing.
In industry:
- The Information Technology Industry Council published their new AI Accountability Framework, which outlined a structure for “the responsible development and deployment of AI systems.”
- Anthropic released a new program funding the development of third-party model evaluations, prioritizing AI safety assessments, advanced capability and safety metrics, and infrastructure for developing evaluations.
- Microsoft withdrew from its observer role on OpenAI’s board, and Apple backed out of a similar planned role, as EU and US regulators examine antitrust and competition concerns in the AI space.
- Apple released a round of threat notifications to iPhone users in 98 countries who may have been targeted by mercenary spyware.
- Microsoft released a white paper on the potential abusive uses of generative AI, Microsoft’s approach to combating those abuses, and policy recommendations for protecting the public.
In the courts:
- The US Ninth Circuit Court of Appeals heard oral arguments in NetChoice v. Bonta. The suit challenges the constitutionality of the California Age-Appropriate Design Code Act (CAADCA), which takes a ‘safety by design’ approach to child online safety legislation.
- The US Ninth Circuit Court of Appeals also heard oral arguments in X Corp. v. Bonta. X, formerly known as Twitter, is challenging the constitutionality of California’s AB 587, which sought to compel social media platforms to submit reports to the state disclosing their terms of service and content moderation policies.
- On July 15, Meta filed a motion to dismiss Zuckerman v. Meta Platforms in the US District Court for the Northern District of California. The case, filed by the Knight First Amendment Institute on behalf of professor Ethan Zuckerman, asks the district court to recognize Section 230 as a tool “that empower[s] people to control what they see on social media,” according to the Institute.
- Meta agreed to a $1.4 billion settlement with the state of Texas over allegations that the company violated state privacy laws by collecting millions of users’ biometric data without consent.
Other Legislation Updates
The following bills made progress in July:
- The US Senate passed the DEFIANCE Act of 2024 (S.3696) by unanimous consent. The bill creates a “federal civil remedy for victims who are identifiable in a ‘digital forgery’... that depicts the victim in the nude, or engaged in sexually explicit conduct or sexual scenarios.”
- The Senate Committee on Commerce, Science, and Transportation advanced numerous AI-related bills during a full committee Executive Session on July 31, 2024, including the:
- Future of AI Innovation Act (S.4178) to “establish artificial intelligence standards, metrics, and evaluation tools, to support artificial intelligence research”
- TAKE IT DOWN Act (S.4569) to “require covered platforms to remove non-consensual intimate visual depictions”
- CREATE AI Act of 2023 (S.2714) to “establish the National Artificial Intelligence Research Resource”
- NSF AI Education Act of 2024 (S.4394) to “support National Science Foundation education and professional development relating to artificial intelligence”
- VET Artificial Intelligence Act (S.4769) to “require the Director of the National Institute of Standards and Technology to develop voluntary guidelines and specifications”
- TEST AI Act of 2023 (S.3162) to “improve the requirement for the Director of the National Institute of Standards and Technology to establish testbeds”
- Promoting United States Leadership in Standards Act of 2024 (S.3849) to “promote United States leadership in technical standards”
- Artificial Intelligence Public Awareness and Education Campaign Act (S.4596) to “require the Secretary of Commerce to conduct a public awareness and education campaign to provide information regarding the benefits of, risks relating to, and the prevalence of artificial intelligence”
- Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) to “provide a framework for artificial intelligence innovation and accountability, and for other purposes.”
- The Senate Commerce Committee also voted out an amendment to the PLAN for Broadband Act (S.2238) that would direct $7 billion in funding to the recently lapsed Affordable Connectivity Program.
- The Homeland Security and Governmental Affairs Committee advanced the PREPARED for AI Act (S.4495) to “enable safe, responsible, and agile procurement, development, and use of artificial intelligence by the Federal Government.”
- A unanimous consent vote on the Protecting Elections from Deceptive AI Act (S.2770) and the AI Transparency in Elections Act (S.3875) failed in the Senate.
The following bills were introduced across the House and Senate in July:
- Content Origin Protection and Integrity from Edited and Deepfaked (COPIED) Media Act of 2024 (S.4674, introduced by Sens. Maria Cantwell (D-WA), Marsha Blackburn (R-TN), and Martin Heinrich (D-NM)). The bill would “require transparency with respect to content and content provenance information, to protect artistic content.”
- Fraudulent Artificial Intelligence Regulations (FAIR) Elections Act of 2024 (S.4714, introduced by Sens. Jeff Merkley (D-OR) and Alex Padilla (D-CA) and co-sponsored by Sens. Mazie Hirono (D-HI), Sheldon Whitehouse (D-RI), and Peter Welch (D-VT)): This bill would “prohibit the distribution of false AI-generated election media and to amend the National Voter Registration Act of 1993 to prohibit the removal of names from voting rolls using unverified voter challenge databases.”
- Algorithmic Justice and Online Platform Transparency Act (H.R.4624, S.2325, reintroduced by Rep. Doris Matsui (D-CA) with Rep. Kweisi Mfume (D-MD), and Sen. Ed Markey (D-MA) with Sens. Sheldon Whitehouse (D-RI) and Elizabeth Warren (D-MA)): The House bill establishes “requirements for certain commercial online platforms (e.g., social media sites) that withhold or promote content through algorithms and related computational processes that use personal information,” while the Senate bill would “prohibit the discriminatory use of personal information by online platforms in any algorithmic process, to require transparency in the use of algorithmic processes and content moderation.”
- TAKE IT DOWN Act (H.R.8989, introduced by Rep. Maria Salazar (R-FL) with Reps. Madeleine Dean (D-PA), August Pfluger (R-TX), Stacey Plaskett (D-VI), Vern Buchanan (R-FL), and Debbie Dingell (D-MI)): A companion to Senate bill S.4569 introduced last month, it would “require covered platforms to remove nonconsensual intimate visual depictions.”
- A bill (H.R.8929) was introduced by Rep. David Schweikert (R-AZ) that would “prohibit digital platforms from using information about a user unless the user consents to such use, to ensure personal information is considered a property right.”
- Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act (S.4769, introduced by Sens. John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV)). This bill would “require the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines and specifications for internal and external assurances of artificial intelligence systems.”
- Section 508 Refresh Act (S.4766, introduced by Sen. Bob Casey (D-PA) with Sens. Ron Wyden (D-OR), John Fetterman (D-PA), and Tammy Duckworth (D-IL)). The bill would require federal agencies to take several steps to “ensure people with disabilities can use federal technology, including websites.”
- SAFE Supply Chains Act (S.4651, introduced by Sen. John Cornyn (R-TX) with Sen. Gary Peters (D-MI)). The bill would “require agencies to use information and communications technology products obtained from original equipment manufacturers or authorized resellers.”
- PAID Act of 2024 (H.R.8924, introduced by Rep. Young Kim (R-CA) with Rep. John Moolenaar (R-MI)). The bill would “require the Secretary of Commerce to identify and report on foreign adversary entities using intellectual property related to emerging technology without a license.”
- A bill (S.4792) to amend the Secure and Trusted Communications Networks Act of 2019 (H.R.4998) was introduced by Sen. Rick Scott (R-FL). The bill would “add communications equipment and services produced or provided by Shenzhen Da-Jiang Innovations Sciences and Technologies Company Limited and Autel Robotics to the list that the Federal Communications Commission is required to maintain under that Act.”
- A bill (S.4668) to amend the National Defense Authorization Act for Fiscal Year 2018 was introduced by Sen. Jerry Moran (R-KS). The bill would “increase the effectiveness of the Technology Modernization Fund.”
- The Intimate Privacy Protection Act (proposed by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA)). The bill would “amend section 230 of the Communications Act of 1934 to combat cyberstalking, intimate privacy violations, and digital forgeries.”
- The AIDE Act of 2024 (introduced by Sen. Peter Welch (D-VT) and Sen. Ben Ray Luján (D-NM)). The bill would “authorize the National Science Foundation to support research on the development of artificial intelligence-enabled efficient technologies.”
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.