March 2025 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Ben Lennett / Mar 31, 2025

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

WASHINGTON, DC—Federal Trade Commissioners Rebecca Kelly Slaughter (L) and Alvaro Bedoya (R) speak during a hearing in the Rayburn House Office Building on Capitol Hill on July 13, 2023. (Photo by Shuran Huang for The Washington Post via Getty Images)

March was marked by President Donald Trump's controversial move to dismiss two Democratic commissioners, Alvaro Bedoya and Rebecca Kelly Slaughter, from the Federal Trade Commission (FTC), undermining the agency’s long-standing independence from the executive branch. Both commissioners contested the firings, arguing that the dismissals violated Supreme Court precedent limiting a president's ability to remove FTC commissioners without cause. The action left the FTC with only two Republican members, Chairman Andrew Ferguson and Commissioner Melissa Holyoak.

In parallel to the FTC controversy, the Trump Administration eliminated the 18F office, a team of technologists who served as in-house tech consultants for the federal government, and fired over 70 employees at the Commerce Department’s National Institute of Standards and Technology (NIST) as part of personnel cuts across the US federal government. NIST’s continued role in AI and cybersecurity, including the AI Safety Institute, became increasingly uncertain.

These developments occurred amid broader shifts in AI policy under the Trump Administration. The White House solicited thousands of public comments for its forthcoming AI Action Plan, aimed at “securing and advancing American AI dominance.” Industry leaders pushed for federal preemption of state rules and government adoption of AI, while civil society groups stressed risk mitigation and open-source development. Finally, the Department of Justice dropped its request in an antitrust case against Google that would have forced the company to sell its investments in AI companies, instead asking the court to require the company only to notify the government before making further acquisitions of AI companies.

Read on to learn more about March developments in US tech policy.

Trump Removes Democratic Commissioners from the FTC

Summary

President Donald Trump removed two Democratic Commissioners, Alvaro Bedoya and Rebecca Kelly Slaughter, from the Federal Trade Commission (FTC) this month, despite the FTC’s long-standing status as an agency independent of the executive branch. The Trump Administration has challenged the FTC’s independence in multiple instances, with the president signing an executive order last month that claimed expansive authority over several independent federal regulatory agencies, including the FTC and FCC.

Both commissioners have challenged their firings, arguing they are illegal under the Supreme Court precedent established in Humphrey's Executor v. FTC (1935), which holds that a president cannot remove an FTC commissioner without cause; both commissioners say the Administration offered none. Whether the courts will uphold the precedent is far from certain. Berin Szóka argues that the Court may reverse Humphrey's, given its recent decision in Seila Law v. Consumer Financial Protection Bureau (2020).

Democratic Senate and House leaders opposed President Trump’s actions. Sen. Amy Klobuchar (D-MN), who serves on the committee that oversees the FTC, argued in a statement that "President Trump's dismissal of Commissioners Slaughter and Bedoya is not only illegal but also hurts consumers...” Klobuchar and a majority of Democratic Senators sent a letter to the White House, arguing the President’s action “contradicts long standing Supreme Court precedent, undermines Congress’s constitutional authority to create bipartisan, independent commissions, and upends more than 110 years of work at the FTC to protect consumers from deceptive practices and monopoly power.” In contrast, Andrew Ferguson, the current Chairman of the FTC, appointed by President Trump, released a statement claiming the action was “necessary to ensure democratic accountability for our government.”

Removing Commissioners Bedoya and Slaughter could have significant consequences for the FTC, which now has only two Republican members, Chairman Ferguson and Commissioner Melissa Holyoak. Trump has reportedly selected Mark Meador for the third Republican seat, though he will need to be confirmed by the Senate. In an interview with Tech Policy Press, Commissioner Bedoya argued that the White House undermined the FTC's independence, which could lead to the weakening or abandonment of enforcement actions against big tech firms. He noted the importance of a bipartisan commission with dissenting voices to ensure robust scrutiny of issues and prevent politically motivated decisions.

What We’re Reading

  • Cristiano Lima-Strong, “A Conversation with Alvaro Bedoya on Trump's FTC Firings,” Tech Policy Press.
  • Berin Szóka, “Courts Won't Stop Trump’s Hostile Takeover of the FTC. Here's How to Resist,” Tech Policy Press.
  • Craig Aaron and Jessica J. González, “We Must Fight Back Against Trump’s Illegal FTC Firings,” Tech Policy Press.
  • Cristiano Lima-Strong, “Democrats at House Hearing: Trump Undermined Child Online Safety with FTC Firings,” Tech Policy Press.

White House Receives Thousands of Public Comments on Trump’s AI Action Plan

Summary

In February, the White House released a request for information soliciting public comments to inform the Trump Administration’s “AI Action Plan.” The Action Plan would define policy priorities and actions “to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.” In a White House press release, Lynne Parker, Principal Deputy Director of the Office of Science and Technology Policy, noted that the AI Action Plan “is the first step in securing and advancing American AI dominance.” Over 8,000 public comments from a wide array of stakeholders were submitted by the March 15 deadline.

While comments and policy priorities varied widely, leading industry firms pushed for regulatory clarity at the federal level, government adoption of AI, copyright protections, and increased funding for infrastructure to meet AI’s growing energy demands. For example, in its comments, OpenAI requested that the federal government preempt state-level regulations to provide relief from a growing “patchwork of regulations.” Google recommended that the federal government “lead by example” in the adoption and deployment of AI systems. Meta, OpenAI, and Microsoft reinforced claims that their use of copyrighted data was permissible “because the information was transformed in the process of training their models and was not being used to replicate the intellectual property of rights holders.” Anthropic suggested that the federal government should work with state and local governments to “reduce permitting burdens for new energy and [data] center construction,” and should consider allocating federal funding toward “strategic infrastructure projects.”

Civil society groups submitted comments about mitigating AI’s risks, supporting workers, encouraging open-source AI development, and preserving the National Institute of Standards and Technology’s (NIST) AI Safety Institute. The Center for Data Innovation, the Center for a New American Security, and the Center for Security and Emerging Technology (CSET) proposed creating a national database to “track AI failures, identify systemic weaknesses, and coordinate risk mitigation.” CSET also encouraged the federal government to increase funding for “technical apprenticeships and community college programs” to strengthen the American AI workforce. The Center for Democracy and Technology (CDT) recommended that open models retain “their central position in the American AI ecosystem” and pushed for the US to remain a leader in open-source AI development. CDT also urged NIST to continue developing standards for mitigating AI’s risks and ensuring companies safeguard individual rights in AI, while the Center for AI and Digital Policy requested that the government fully fund NIST’s AI Safety Institute.

It remains unclear which policy proposals will be incorporated into the final AI Action Plan, which is expected to be announced by July 2025.

What We’re Reading

  • Cecilia Kang, “Emboldened by Trump, A.I. Companies Lobby for Fewer Rules,” New York Times.
  • Cristiano Lima-Strong, “How Tech and Civil Society Are Nudging Trump on AI Policy,” Tech Policy Press.
  • Mark MacCarthy, “A technical AI government agency plays a vital role in advancing AI innovation and trustworthiness,” Brookings Institution.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.

In Congress:

  • The Senate Commerce Committee advanced the nominations of Mark Meador for Federal Trade Commissioner and Michael Kratsios for Director of the Office of Science and Technology Policy.
  • 36 Members of Congress sent a letter to Meta about the company’s “current firearms and related accessories advertising policy,” specifically on ads for objects that can be adapted into firearm silencers and suppressors.

In the executive branch and agencies:

  • The FTC removed over 300 posts published during the Biden Administration from its business guidance blog, including consumer protection information on AI and information on privacy lawsuits from the past four years.
  • The FCC established a new council for national security focused on countering cyber threats to US telecommunications from foreign adversaries, particularly the People’s Republic of China and the Chinese Communist Party.
  • President Trump signed an executive order creating a “Strategic Bitcoin Reserve and a US Digital Asset Stockpile,” a cryptocurrency reserve capitalized with government-owned tokens. Following the executive order, Trump hosted a cryptocurrency summit, bringing crypto industry leaders to the White House to discuss government-owned digital assets.
  • The General Services Administration (GSA) eliminated the 18F office, a team of technologists who served as in-house tech consultants for the federal government and had previously built successful government tech products such as the Internal Revenue Service’s (IRS) free tax-filing service.
  • Following significant personnel cuts, the GSA announced a new generative AI platform giving GSA staff access to LLMs from Anthropic and Meta.
  • The Commerce Department fired over 70 National Institute of Standards and Technology (NIST) employees as part of the Trump Administration’s personnel cuts across the US federal government.

In industry:

  • In response to significant firings at the Commerce Department, multiple industry advocacy groups wrote letters to Secretary of Commerce Howard Lutnick expressing support for NIST and its role in AI and cybersecurity.
  • A Meta whistleblower alleged that Meta developed censorship tools and considered sharing user data with Beijing to gain access to the Chinese market between 2014 and 2017.
  • Google’s public policy team released a legislative proposal for kids’ online safety, calling Utah’s App Store Accountability Act an “example of concerning legislation” and instead proposing a five-part legislative framework including “privacy-preserving age signal shared only with consent, appropriate safety measures within apps, responsible use of age signals, no ads personalization to minors, and centralized parental controls.”

In civil society:

  • 118 researchers wrote a letter to policymakers “affirm[ing] the scientific consensus that artificial intelligence (AI) can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms.”
  • The Center for Democracy and Technology (CDT) and the American Association of People with Disabilities (AAPD) published a report on the impact of AI on people with disabilities, providing recommendations for the inclusion of disabled people in the “creation, deployment, and auditing of these technologies and of the policies that govern them.”

In the courts:

  • The Department of Justice filed a revised proposed final judgment in the government’s antitrust case involving Google search. The revised judgment removed a requirement that Google divest its investments in AI companies, including its 14 percent stake in Anthropic, and replaced it with an obligation to notify the government before making further investments in or acquisitions of AI companies.
  • US District Court for the Northern District of California Judge Beth L. Freeman granted a second motion for preliminary injunction in NetChoice v. Bonta. NetChoice filed suit against California Attorney General Robert Bonta, arguing that the California Age-Appropriate Design Code Act (AB-2273) was unconstitutional.

Legislation Updates

The following bills made progress across the House and Senate in March:

  • Emerging Innovative Border Technologies Act - H.R.993. Introduced by Rep. J. Luis Correa (D-CA), the bill was passed by the House and referred to the Senate Committee on Homeland Security and Governmental Affairs.
  • Understanding Cybersecurity of Mobile Networks Act - H.R. 1709. Introduced by Rep. Greg Landsman (D-OH), the bill advanced through the House Committee on Energy and Commerce.
  • NTIA Policy and Cybersecurity Coordination Act - H.R. 1766. Introduced by Rep. Jay Obernolte (R-CA), the bill would “amend the National Telecommunications and Information Administration Organization Act to establish the Office of Policy Development and Cybersecurity.” It advanced through the House Committee on Energy and Commerce.
  • Informing Consumers about Smart Devices Act - S.28. Introduced by Sen. Ted Cruz (R-TX), the bill advanced through the Senate Committee on Commerce, Science, and Transportation.
  • PLAN for Broadband Act - S. 323. Introduced by Sen. Roger Wicker (R-MS), the bill advanced through the Senate Committee on Commerce, Science, and Transportation.
  • ADS for Mental Health Services Act - S. 414. Introduced by Sen. Dan Sullivan (R-AK), the bill advanced through the Senate Committee on Commerce, Science, and Transportation.
  • Romance Scam Prevention Act - S. 841. Introduced by Sen. Marsha Blackburn (R-TN), the bill would “require online dating service providers to provide fraud ban notifications to online dating service members.” It advanced through the Senate Committee on Commerce, Science, and Transportation.

The following bills were introduced across the House and Senate in March:

  • To require a strategy to defend… - H.R. 2152. Introduced by Rep. Zachary Nunn (R-IA), the bill would “require a strategy to defend against the economic and national security risks posed by the use of artificial intelligence in the commission of financial crimes, including fraud and the dissemination of misinformation.”
  • To establish the National Artificial Intelligence Research… - H.R. 2385. Introduced by Rep. Jay Obernolte (R-CA), the bill would “establish the National Artificial Intelligence Research Resource.”
  • A bill to require the use of artificial intelligence… - S. 1110. Introduced by Sen. Jon Husted (R-OH), the bill would “require the use of artificial intelligence to review agency regulations.”
  • Consumer Safety Technology Act - H.R. 1770. Introduced by Rep. Darren Soto (D-FL), the bill “directs the Consumer Product Safety Commission to establish a pilot program to explore the use of artificial intelligence in support of the mission of the Commission and to direct the Secretary of Commerce and the Federal Trade Commission to study and report on the use of blockchain technology and tokens.”
  • A bill to amend title 18… - S. 962. Introduced by Sen. James Lankford (R-OK), the bill would “amend title 18, United States Code, to preclude a provider of electronic communication service or remote computing service from receiving reimbursement or other compensation for information relating to child exploitation.”
  • Digital Integrity in Democracy Act - S. 840. Introduced by Sen. Peter Welch (D-VT), the bill would “hold accountable operators of social media platforms that intentionally or knowingly host false election administration information.”

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Ben Lennett
Ben Lennett is managing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
