July 2025 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Ben Lennett / Aug 1, 2025

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

July 23, 2025—US President Donald Trump announces his AI Action Plan and signs executive orders at an event hosted by the All‑In Podcast and the Hill & Valley Forum. Source: White House

As the Trump Administration unveiled its AI Action Plan this month, federal tech policy continued its sharp turn toward deregulation. The plan was organized around three pillars: accelerating innovation, building out infrastructure, and asserting global leadership. It called for the fast-tracked federal adoption of AI, the rollback of regulatory barriers, and a push to limit state-level rules for AI systems. Industry largely applauded the plan, while critics warned that its approach to AI governance prioritized industry speed over democratic safeguards. In addition to the plan, President Trump signed three executive orders: one prohibiting the use of “ideologically biased” AI tools by federal agencies, another aimed at streamlining permitting for major AI infrastructure projects, and a third promoting the export of US-built AI systems.

Meanwhile, the administration dramatically restructured the State Department, gutting offices long responsible for supporting internet freedom, human rights, and democratic access to digital tools. The moves effectively ended the US era of digital diplomacy and signaled a sharp pivot away from its past approaches to promoting internet freedom. This shift also coincided with a new willingness to use economic measures to shape the global tech landscape. The most prominent example was Brazil, where the Trump Administration imposed punitive tariffs and visa bans in response to what the President labeled “secret and unlawful censorship orders” against US social media platforms.

Read on to learn more about July developments in US tech policy.

The Trump Administration releases its AI action plan

Summary

The Trump administration released its long-anticipated AI Action Plan, outlining over 90 federal actions across three pillars: “Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security,” aimed at cementing US global leadership in AI development and deployment.

Key provisions in the roadmap included prioritizing deregulation by calling for rollbacks of environmental and federal rules seen as barriers to AI growth. The plan also focused on accelerating the federal government’s AI adoption, especially within the Department of Defense, through the creation of a “‘talent-exchange program’ to more quickly deploy AI expertise across agencies and the development of a new ‘procurement toolbox’ to make acquiring the tools more accessible.” Additionally, the plan directed federal agencies to contract only with developers of “ideologically neutral” models to procure “unbiased” AI products, and reinforced the administration’s desire for federal preemption. The plan urged agencies to “consider a state’s AI regulatory climate when making funding decisions” related to AI programs and directed the Federal Communications Commission to assess whether state AI rules could interfere with its enforcement authority. Internationally, the plan supported exporting full-stack AI technology to allies and called for diplomatic engagements to promote US-led AI governance standards while countering authoritarian influence in the space.

Alongside the AI Action Plan, President Trump also signed three executive orders, including one that bans government procurement of AI tools perceived as ideologically biased, one to streamline permitting for major AI infrastructure, and another promoting the export of US-based AI products.

Many tech companies and pro-business groups publicly praised Trump’s AI Action Plan as a comprehensive strategy to secure US leadership in the AI race. Anthropic released a statement emphasizing that it was “encouraged by the plan’s focus on accelerating AI infrastructure and federal adoption, as well as strengthening safety testing and security coordination.” Palantir supported the plan in a post on X, stating “the Trump Administration has written the source code for the next American century.”

Similarly, xAI called the plan a “positive step toward removing regulatory barriers and enabling even faster innovation for the benefit of Americans and for humanity as a whole.” Arvind Krishna, IBM Chairman and CEO, applauded the administration for taking a “critical step towards harnessing AI for sustained economic growth and national competitiveness.” Victoria Espinel, CEO of Business Software Alliance, a global trade group representing the software and digital services industry, welcomed the plan, commending it for “addressing a range of issues including talent and workforce development, infrastructure and data, and AI governance that serve as pillars for successful AI adoption and US competitiveness.”

In contrast, civil society and Democratic lawmakers strongly opposed the administration’s AI Action Plan, calling out provisions related to federal preemption as being particularly problematic and criticizing Trump’s executive order on “anti-woke AI” in the federal government. Samir Jain, Vice President of Policy at the Center for Democracy and Technology, suggested that the plan includes “actively detrimental provisions” such as the administration hindering state-level efforts to document and mitigate AI harms.

Cody Venzke, Senior Policy Counsel at the American Civil Liberties Union, criticized the plan for attempting to limit state-level AI regulations despite the Senate overwhelmingly opposing federal preemption, stating that the “preemption effort stifles local initiatives to uphold civil rights and shield communities from biased AI systems in areas like employment, education, health care, and policing.” In a press release, Sen. Edward Markey (D-MA) urged AI companies such as Anthropic, Meta, and OpenAI, among others, to reject the AI Action Plan and Trump’s executive order on anti-woke AI, calling the action unconstitutional and warning companies to “not become pawns in Trump’s effort to eliminate dissent in the US.” You can read more responses to the AI Action Plan here.

What We’re Reading

With State Department closures and Brazil tariffs, the US moves away from digital diplomacy

Summary

The United States has historically positioned itself as a global champion of internet freedom, advocating for an open, global internet grounded in human rights and freedom of expression. For over a decade, this agenda was backed by significant and tangible support for circumvention tools, digital security training, and direct aid to activists operating under authoritarian regimes, with the Bureau of Democracy, Human Rights, and Labor (DRL) at the State Department playing a central role. However, that era officially ended this month, with a significant restructuring of the State Department leading to the dismissal of over 1,350 employees and the merger or elimination of more than 300 bureaus.

In a commentary for Tech Policy Press, Konstantinos Komaitis summarized the moment bluntly: “The US Just Logged Off from Internet Freedom.” As he argues, “These changes are more than bureaucratic housecleaning. They represent a paradigmatic shift in America’s approach to diplomacy, development, and digital governance.” This shift was already well underway as the Trump Administration slashed support for digital rights initiatives, froze grants, and dismantled USAID within its first 100 days in power. Those actions have already decimated global digital rights organizations. The latest moves pull the plug entirely.

What has emerged in its place appears to be a strategy centered on coercion: tariffs, visa bans, and punitive executive orders aimed at governments accused of infringing on supposed US interests. Brazil has become the first major test case. In early July, President Trump announced 50% tariffs on Brazilian products, citing what he called Brazil’s “insidious attacks on Free Elections, and the fundamental Free Speech Rights of Americans.” As David Kaye, former United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, observed, Trump’s actions with respect to Brazil are “out of control, political interference disguised as trade policy.”

Shortly after, the administration unveiled a new policy targeting foreign officials involved in “censorship” of US citizens. The first targets were Brazilian judicial figures, including Supreme Federal Court Justice Alexandre de Moraes, who has clashed with US platforms like Rumble and X over content moderation and disinformation. Then came a sweeping executive order in late July, imposing a 40% duty on most Brazilian goods, explicitly linking the move to what Trump called “SECRET and UNLAWFUL Censorship Orders” directed at US social media companies—and to the legal treatment of former President Jair Bolsonaro.

The US’s pivot from championing internet freedom to weaponizing trade in its name marks a dramatic departure from recent policy. Rather than supporting open internet principles through diplomacy and aid, the US is now using economic pressure and punitive measures to assert its interests abroad. It sends a clear signal: digital governance is now fair game in the broader arena of geopolitical power plays. And it forces other nations to ask whether defending their regulatory sovereignty may now come at a steep economic cost.

What We’re Reading

  • Konstantinos Komaitis, “The US Just Logged Off from Internet Freedom,” Tech Policy Press.
  • Laís Martins, “Trump’s New Brazil Tariffs Aren’t About Trade, and They’re Not About Free Speech,” Tech Policy Press.
  • Ramsha Jahangir, “Crying ‘Censorship,’ US Pressures Foreign Officials in Bid to Counter Tech Regulations,” Tech Policy Press.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.

In the executive branch and agencies:

  • President Trump announced $90 billion in investments by major US companies to establish Pittsburgh as a national hub for AI and data infrastructure. The initiative aimed to strengthen US economic competitiveness, energy capacity, and national security by expanding data centers, workforce training, and AI innovation efforts in Pennsylvania.
  • The Department of Energy announced that it selected four federal sites (Idaho National Laboratory, Oak Ridge Reservation, Paducah Gaseous Diffusion Plant, and Savannah River Site) for expanded AI-related infrastructure, including large-scale data centers and power generation. Industry representatives expressed strong interest in the projects, and more project details and solicitations, including the selection of private or institutional partners to carry out the work, are expected in the coming months.
  • The Government Accountability Office (GAO) published a report finding that federal agencies’ use of generative AI grew ninefold between 2023 and 2024. Across the 11 agencies reviewed that maintained AI inventories, the total number of reported AI use cases of all kinds nearly doubled over the same period.

In Congress:

  • The Senate voted 99–1 to remove a proposed 10-year moratorium on state-level AI regulation from the One Big Beautiful Bill. The amendment to remove the provision, championed by Sen. Marsha Blackburn (R-TN), highlighted bipartisan opposition to the moratorium and reinforced criticisms that the provision would have effectively blocked meaningful AI oversight at the state level. The move was celebrated by lawmakers and advocates who argued that the moratorium would have left families and communities unprotected from unchecked AI systems. Despite the provision failing to make it into the One Big Beautiful Bill, House Energy and Commerce Chair Brett Guthrie (R-KY) said Republicans will continue pushing for a freeze on state AI regulations. Guthrie said he would consider a shorter two-year moratorium to encourage Congress to establish a national AI framework. Senate Commerce Chair Ted Cruz (R-TX) also plans to revisit the issue in upcoming standalone AI legislation.
  • The House Appropriations subcommittee on Commerce, Justice, Science, and Related Agencies advanced a fiscal 2026 funding bill with expanded investments in AI, quantum technology, and advanced manufacturing alongside cuts to major science agencies. The National Science Foundation would face a 23 percent budget cut and the National Telecommunications and Information Administration a 20 percent cut, while the National Institute of Standards and Technology would receive a modest $122.8 million funding increase. Committee Democrats opposed the bill, arguing that it would undermine US scientific and economic competitiveness by reducing funding for STEM education, NASA, and other research initiatives.
  • The House released a draft of the 2026 National Defense Authorization Act (NDAA), which included several provisions related to AI and technology. These provisions included measures to strengthen the cybersecurity of AI technologies used by the Pentagon, require the creation of a software bill of materials for AI systems, and mandate AI security training for Department of Defense personnel.
  • The Senate voted 52 to 42 to confirm Arielle Roth as head of the National Telecommunications and Information Administration (NTIA), where she will oversee broadband grants, spectrum policy, and AI-related initiatives. Despite Roth’s previous experience at the Federal Communications Commission, some Democrats expressed concern about whether she would continue Biden-era consumer protection measures and broadband equity priorities. As NTIA Administrator, she will influence major programs like the $42.45 billion Broadband Equity, Access, and Deployment program and may help shape policy on issues such as data privacy and child online safety.

In civil society:

  • The Center for Democracy & Technology (CDT) and the American Association of People with Disabilities (AAPD) published a report on the incorporation of privacy protections into inclusive design for assistive technologies.
  • The Electronic Privacy Information Center (EPIC) released a report calling for legislators to close loopholes in consumer privacy laws that allow financial institutions to sell consumer data.
  • Data & Society produced a brief on the “myths that are driving data center construction and speculation,” bringing a focus to the environmental protections bypassed to accelerate AI growth.

In industry:

  • Anthropic published a targeted transparency framework emphasizing that AI regulation “should not impede AI innovation, nor should it slow our ability to realize AI's benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions.” The framework’s main tenets include limiting transparency requirements to only the largest frontier model developers, creating and making public a Secure Development Framework for assessing and mitigating risk, publishing documentation of testing and evaluation procedures, protecting whistleblowers by explicitly prohibiting false statements about compliance, and establishing minimum standards for security and public safety.

In the courts:

  • The National Retail Federation sued New York State over a new law requiring retailers to disclose when customer data is used to set prices, arguing that it violates retailers’ First Amendment rights. The group claimed the law forced businesses to display misleading warnings about algorithmic pricing, which they argued actually benefited consumers through discounts and promotions. Governor Kathy Hochul signed the new law in May to increase pricing transparency.
  • NetChoice filed a federal lawsuit against Arkansas challenging two new laws that imposed limits on social media content and allowed parents to sue platforms if their child dies by suicide after viewing “harmful” material. NetChoice argued that the laws are unconstitutionally vague, restrict content for both adults and minors, and fail to provide clear compliance guidelines for social media platforms. This legal action followed a previous ruling that struck down an Arkansas age-verification law.

Legislation Updates

The following bills made progress across the House and Senate in July:

  • GENIUS Act - S. 1582. Introduced by Sen. Bill Hagerty (R-TN), the bill passed the Senate in June and then passed the House this month. It was signed into law by the President.
  • NTIA Policy and Cybersecurity Coordination Act - H.R. 1766. Introduced by Rep. Jay Obernolte (R‑CA), the bill passed the House and was referred to the Senate Committee on Commerce, Science, and Transportation.
  • Traveler Privacy Protection Act of 2025 - S. 1691. Introduced by Sen. Jeff Merkley (D‑OR), the bill was marked up during a full committee executive session of the Senate Committee on Commerce, Science, and Transportation.

The following bills were introduced across the House and Senate in July:

  • Unleashing AI Innovation in Financial Services Act - H.R. 4801. Introduced by Rep. French Hill (R-AR), the bill “would promote Artificial Intelligence (AI) in financial services through regulatory sandboxes for AI test projects at federal financial regulatory agencies.” A companion bill was introduced in the Senate (S. 2528) by Sen. Mike Rounds (R-SD).
  • Stop AI Price Gouging and Wage Fixing Act of 2025 - H.R. 4640. Introduced by Rep. Greg Casar (D‑TX), the bill would “ban companies from using Artificial Intelligence (AI) to set prices or wages based on Americans’ personal data.”
  • Empowering App-Based Workers Act - S. 2488. Introduced by Sen. Brian Schatz (D‑HI), the bill would “improve transparency on how app companies operate and help boost wages for rideshare drivers and delivery app workers.”
  • Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act - S. 2455. Introduced by Sen. Peter Welch (D‑VT), the bill “allows copyright holders to access training records used for AI models to determine if their work was used—a process currently used for internet piracy.”
  • Preventing Recurring Online Abuse of Children Through Intentional Vetting of Artificial Intelligence Data Act (PROACTIV AI Data Act) - S. 2381. The bill would “encourage artificial intelligence (AI) developers to identify, remove, and report known child sexual abuse material (CSAM) from the datasets they compile or obtain for use in training AI models to help proactively stop AI image generators from creating child pornography.”
  • Preparing Election Administrators for AI Act - S. 2346. Introduced by Sen. Amy Klobuchar (D‑MN), the bill would “require the Election Assistance Commission (EAC), in consultation with the National Institute of Standards and Technology, to develop voluntary guidelines for election offices concerning artificial intelligence (AI) use and its associated risks.”
  • Federal Data Exploitation Accountability Act of 2025 - S. 2367. Introduced by Sen. Josh Hawley (R‑MO), the bill would “protect consumers’ data rights and hold Big Tech companies accountable for illegally pirating creators’ copyrighted works to train their artificial intelligence (AI) models.”

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Ben Lennett
Ben Lennett is the Managing Editor of Tech Policy Press. A writer and researcher focused on understanding the impact of social media and digital platforms on democracy, he has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
