September 2025 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Ben Lennett / Oct 1, 2025

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press. Isabel Epistelomogi, a policy and research intern with Freedman Consulting, also contributed to this article.

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled "AI’ve Got a Plan: America’s AI Action Plan" on Wednesday, September 10, 2025.
This month, AI policy and deregulation were again a focus in Congress as Sen. Ted Cruz (R-TX), Chairman of the Senate Commerce Committee, introduced a new federal AI framework built around the SANDBOX Act (S. 2750). The bill would empower the White House Office of Science and Technology Policy (OSTP) to establish a regulatory sandbox where AI companies could test products with two-year exemptions from existing federal rules. In a similar vein, Rep. Michael Baumgartner (R-WA) introduced “The American Artificial Intelligence Leadership and Uniformity Act” (H.R. 5388) in the House. The bill seeks to resurrect Sen. Cruz’s legislative efforts to preempt state AI regulation, with Baumgartner’s bill calling for a five-year moratorium on most state and local AI regulations.
At the same time, concerns about political influence over federal agencies intensified following the suspension of comedian Jimmy Kimmel by ABC/Disney. The move came after FCC Chairman Brendan Carr suggested revoking broadcast licenses of ABC affiliates over Kimmel’s remarks, and broadcaster Nexstar Media—also seeking FCC approval for a $6 billion merger—preemptively dropped Kimmel’s program. Observers warned that the Trump administration’s increasing use of merger reviews at the FCC and FTC to reward allies and punish critics represents an erosion of agency independence.
Beyond these high-profile headlines, federal agencies, Congress, civil society, industry, and the courts were active as well. The FTC launched new inquiries into how AI firms safeguard children and settled a decade-long case against Pornhub’s parent company. Meanwhile, the courts weighed in with landmark rulings, including Anthropic’s $1.5 billion copyright settlement, remedies in Google’s search monopoly case, and Amazon’s $2.5 billion “dark patterns” settlement.
Read on to learn more about September developments in US tech policy.
Sen. Ted Cruz Unveils Light-Touch Federal AI Policy Framework
Summary
US Senate Commerce Committee Chairman Ted Cruz (R-TX) introduced an AI policy framework designed to promote US leadership and innovation in AI development and deployment. As part of the framework, Sen. Cruz unveiled the Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation (SANDBOX) Act (S. 2750), which would require the White House Office of Science and Technology Policy (OSTP) to establish a regulatory sandbox for AI developers “to test, experiment with, or temporarily offer AI products and services.” Cruz described the bill as a light-touch approach to AI regulation that would allow AI companies to apply for two-year exemptions from “obstructive” federal rules to compete with China’s rising AI industry. Sen. Cruz stated that the legislation aligns with the Trump administration’s AI Action Plan, while OSTP Director Michael Kratsios voiced support for the SANDBOX Act.
Industry leaders and experts endorsed Sen. Cruz’s legislative framework for AI policy and applauded the SANDBOX Act. TechNet CEO Linda Moore stated that the organization is “grateful to Senator Cruz for his continued leadership and work to establish an AI policy framework that will support American innovation and strengthen our AI global leadership” and praised the Trump administration's efforts to establish standards that remove barriers to innovation. NetChoice Director of State and Federal Affairs Amy Bos released a statement supporting the SANDBOX Act as an “innovation-first approach that will keep us ahead of global rivals like China.” A Meta spokesperson commended Sen. Cruz, stating that the “regulatory sandbox proposal offers a broad scope that could enable a wide range of business practices, research, and AI technologies—not just a select few—to benefit from the program.”
In contrast, civil society organizations expressed concerns over Sen. Cruz’s policy framework and warned that industry-friendly policies could exacerbate AI’s risks and harms. Data & Society's Policy Director Brian J. Chen and Executive Director Janet Haven described the SANDBOX Act as a “liability shield” for the AI industry, enabling companies to “continue to discriminate, spread deepfakes, exacerbate mental health risks and surveil workers.” J.B. Branch, Public Citizen’s Big Tech accountability advocate, issued a statement calling on Congress to prioritize “legislation that delivers real accountability, transparency, and consumer protection in the age of AI” rather than providing companies with “hall passes” to avoid regulations. Sacha Haworth, Executive Director of The Tech Oversight Project, criticized the SANDBOX Act as a way “for the Trump Administration to strip away standards that hold Big Tech accountable for violating privacy, endangering kids online, and letting scammers rip off seniors and veterans.”
What We’re Reading
- Justin Hendrix and Ben Lennett, “US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers,” Tech Policy Press.
- Brian J. Chen and Janet Haven, “Ted Cruz’s AI sandbox enables dangerous self-regulation, not innovation,” The Hill.
- Senator Ted Cruz, “Will AI’s Future be American?” Real Clear World.
Free Speech and the Politicization of Merger Reviews Take Center Stage
Summary
ABC/Disney temporarily suspended comedian Jimmy Kimmel this month following remarks by Federal Communications Commission (FCC) Chairman Brendan Carr suggesting the potential revocation of broadcast licenses held by ABC’s affiliate television stations. Before the suspension, Nexstar Media, which owns 32 affiliate stations, had already preempted the airing of the program. In addition to concerns about its broadcast licenses, Nexstar’s decision may have been influenced by its pending $6 billion merger, which requires FCC approval. Specifically, the company is seeking regulatory changes to ownership rules that currently limit the number of stations a single entity can control. If approved, Nexstar’s planned acquisition of more than 60 stations from TEGNA, Inc. would result in the combined entity reaching approximately 80 percent of US television households.
This was not the first instance in which a merger review under Chairman Carr—appointed by President Trump—raised questions about political influence. For example, the FCC approved the Paramount/Skydance merger only after Paramount paid $16 million to settle a lawsuit brought by President Trump, and Skydance agreed to terminate its Diversity, Equity, and Inclusion (DEI) programs while appointing an ombudsman at CBS to evaluate complaints of bias against conservatives. Similarly, the Federal Trade Commission (FTC) approved the Omnicom–Interpublic advertising agency merger in part on the condition that the combined firm not maintain any policy that “declines to deal with Advertisers based on political or ideological viewpoints.” Although the FTC, like the FCC, is structured as an independent agency with five commissioners—including a chair—who are nominated by the President and confirmed by the Senate, the Trump administration has increasingly exercised direct political influence over its operations. This was exemplified by the removal of Democratic Commissioners Rebecca Kelly Slaughter and Alvaro M. Bedoya earlier this year.
Although the FTC has carried out aspects of its consumer protection mandate, as evidenced by its enforcement actions this month on children’s privacy and deceptive practices, there is concern that the Trump Administration could increasingly use competition policy to influence the news and information landscape. As former FTC Commissioner Bedoya argued recently, “the president is using his power to block mergers not to protect the public interest, or protect competition, but to punish his enemies and reward his friends.” In contrast, many of President Trump’s supporters welcomed Carr’s comments, though some Senate Republicans expressed concern. These developments highlight the extent to which the politicization of the FTC and FCC under the Trump Administration threatens to undermine the original democratic purposes of policies such as antitrust enforcement and media ownership restrictions, which were designed to curb monopoly power and safeguard a diversity of viewpoints.
What We’re Reading
- Cristiano Lima-Strong and Anish Wuppalapati, “100 Days of Trump: His Enforcers Are Waging War On Content Moderation. It’s Likely Just The Start,” Tech Policy Press.
- Angela Fu, “Media consolidation is shaping who folds under political pressure — and who could be next,” Poynter.
- John Hendel, “‘It’s the threats that are the point’: How Brendan Carr exerts his FCC power,” Politico.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.
In the executive branch and agencies:
- President Trump signed an executive order advancing the framework of a long-anticipated deal to separate TikTok’s US operations from ByteDance, its China-based parent company. The deal, led by Oracle and the private equity firm Silver Lake, would require the US version of TikTok to receive a licensed copy of the ByteDance algorithm, with US data retained and monitored by Oracle. ByteDance is expected to retain no more than a 20% stake and will be excluded from TikTok’s security committee. The order also grants a 120-day delay in enforcement to finalize terms. The Chinese government has not formally approved the deal, though officials acknowledged a “basic framework consensus.” Rep. John Moolenaar (R-MI), chair of the House Select Committee on the Chinese Communist Party, requested an urgent White House briefing following the executive order to review deal details and determine compliance with the 2024 law.
- President Trump called on Microsoft to fire Lisa Monaco, a former Biden Administration Justice Department official who now serves as Microsoft’s President of Global Affairs. Trump wrote on social media that Monaco “is a menace to U.S. National Security, especially given the major contracts that Microsoft has with the United States Government.”
- The Federal Trade Commission (FTC) launched an inquiry into major AI companies, including Alphabet, Character.ai, OpenAI, Snap, xAI, and Meta, requesting information about how they “measure, test, and monitor potentially negative impacts of this technology on children and teens” and ensure their AI tools comply with the Children’s Online Privacy Protection Act.
- The FTC took action against robot toy maker Apitor for violating the Children’s Online Privacy Protection Act (COPPA) by allowing a third-party Chinese software company to collect geolocation data from children without parental consent. The Apitor app, which controls robot toys for kids ages 6-14, required Android users to enable location sharing, which transmitted children’s location data to servers in China without notifying parents. As part of a settlement, Apitor must overhaul its data practices, ensure third-party compliance with COPPA, and delete any improperly collected data. A $500,000 penalty was imposed but suspended due to the company’s inability to pay.
- The FTC settled a decade-long lawsuit against Pornhub’s parent company, Aylo, over claims that the company hosted child sexual abuse material (CSAM) and nonconsensual material (NCM) and failed to prevent the spread of that content. Aylo will pay a $5 million penalty and adopt new compliance, consent, and data privacy programs.
- The FTC opened a call for public comments on its “Strategic Plan for Fiscal Years 2026-2030,” with comments accepted until October 17, 2025.
- The FCC initiated proceedings to revoke recognition from seven Chinese government-controlled electronics testing labs under new “Bad Labs” rules aimed at protecting US national security. The action was part of a larger effort to prevent foreign adversaries from overseeing labs that certify devices for the US market. FCC Chair Brendan Carr emphasized that the move will restore trust in equipment safety and America’s supply chain independence under Trump’s economic agenda.
In Congress:
- The House and Senate versions of the FY26 National Defense Authorization Act (NDAA) included provisions to accelerate AI adoption across the Department of Defense, particularly for logistics, mission-critical tasks, and public-private sandboxes. The House version emphasized AI’s impact on cybersecurity training, workforce development, and international cooperation. The Senate version focused on standardized risk frameworks, model governance, and cybersecurity safeguards. Outcomes remained in flux as the NDAA moved toward reconciliation.
- Sen. Mark Kelly (D-AZ) published an AI roadmap suggesting that leading AI companies fund a federal trust, the AI Horizon Fund, to invest in American workers through upskilling programs, modernized credentialing, and protections for displaced workers. The proposal would also require AI firms to help finance the water, power, and grid systems they rely on. It further emphasized increasing public trust through stronger AI safety standards, oversight, and transparency.
- Sen. Chuck Grassley (R-IA) pressed Meta CEO Mark Zuckerberg over claims that the company tried to silence whistleblower Sarah Wynn-Williams, who testified before Congress about Meta’s alleged ties to China, Foreign Corrupt Practices Act (FCPA) violations, and practices targeting teens. Grassley cited a $50,000-per-violation non-disparagement clause and raised concerns that her severance deal may have violated SEC rules. Meanwhile, Sen. Josh Hawley (R-MO) called for Zuckerberg to testify over additional national security concerns.
- Nine Democratic senators sent a letter to Immigration and Customs Enforcement (ICE) Acting Director Todd Lyons inquiring about ICE’s reported use of a smartphone-based biometric surveillance app, Mobile Fortify. The app scans a person’s face or fingerprint and connects to vast federal databases. Lawmakers warned that the tool enables real-time “Super Queries” into data, including criminal records, immigration status, and personal details from commercial data brokers like LexisNexis. The senators cited concerns about racial bias, wrongful detentions, surveillance of protestors, and chilling effects on free speech. They demanded that ICE disclose usage policies, testing data, database use, and whether US citizens are being targeted.
- The Cybersecurity Information Sharing Act (CISA) of 2015 was set to expire on September 30 amid a looming government shutdown and congressional impasse. If CISA lapses, legal protections for private-sector companies that share cyber threat intelligence with federal agencies would weaken. The law has shielded firms from liability when transmitting sensitive data and has underpinned US cyber defense for a decade. Efforts to extend the law failed in the Senate, including a continuing resolution and a scaled-back alternative from Sen. Rand Paul (R-KY), leaving any renewal to negotiations after the shutdown ends.
In civil society:
- A new report from the Congressional Progressive Caucus Center warned that AI is accelerating surveillance, discrimination, and job loss across workplaces, while eroding worker rights and union power. From biased hiring algorithms to invasive productivity tracking, the report documented how AI is being used to deepen employer control with little transparency or legal guardrails. The report called for federal action, including comprehensive AI labor standards, stronger enforcement, public-interest AI tools, and expanded safety nets.
- The New Democrat Coalition released its 2025 Innovation Agenda, calling for aggressive investment in AI, quantum computing, biotech, and clean energy to counter China’s tech dominance and reignite inclusive US economic growth. The agenda proposed expanded STEM immigration, workforce reskilling, digital privacy protections, AI safety standards, new regional tech hubs, federal AI infrastructure, and a stronger innovation-government partnership built on predictability, trust, and transparency.
- The NYU Center for Technology & Public Policy released a report urging the US to build equitable, secure, and democratically governed public AI research infrastructure. While proposals like the National AI Research Resource (NAIRR) aim to expand access to data and models, the report warned that without strong safeguards, such infrastructure could widen existing inequalities and reinforce corporate dominance. Key recommendations to democratize innovation included embedding cybersecurity, preventing dual-use risks, rejecting exclusive public-private partnerships, and directly supporting under-resourced institutions.
- Americans for Responsible Innovation (ARI) published a white paper on growing national security concerns around the AI data annotation industry, which provides the human-labeled data essential to training advanced AI models. The paper warned that unchecked foreign involvement, particularly from adversaries like China, could erode US leadership in AI, national security, and model integrity. It called for expanded screening of foreign investments, potential export controls, and limits on adversary-controlled access, especially in critical sectors like infrastructure and defense.
- Immigration rights activist Dominick Skinner used AI facial reconstruction and reverse image search tools to identify masked Immigration and Customs Enforcement (ICE) officers. The ICE List Project claimed to have named over 100 ICE employees, prompting backlash from the Department of Homeland Security (DHS) and lawmakers. In response, Sen. Marsha Blackburn (R-TN) proposed the Protecting Law Enforcement from Doxxing Act, which would criminalize doxxing federal officers, and sent a letter to the CEO of PimEyes asking how its technology is being used to identify ICE officers. Sen. Gary Peters (D-MI) and other Democrats also expressed concern about the dangers of masked law enforcement and AI misuse.
- The UC Berkeley Labor Center published a report, “The Current Landscape of Tech and Work Policy in the U.S.: A Guide to Key Laws, Bills, and Concepts,” providing an overview of legislative momentum on regulating digital workplace technologies. The report covers bills on electronic monitoring, algorithmic management, data privacy, automation and job loss, and other issues.
In industry:
- NetChoice, a trade association representing Amazon, Google, Meta, Snap, and other major tech players, launched a new super PAC. The move by NetChoice, known for its legal fights defending Section 230, came amid growing bipartisan pressure to reform the provision.
- Meta launched a new super PAC, the American Technology Excellence Project, which pledges “tens of millions” to back state-level candidates supportive of the artificial intelligence industry. This marked the second super PAC Meta has unveiled in a month, following the launch of META California, focused on AI policy at the state level. The new PAC will focus on electing pro-tech candidates and fending off what Meta calls “poorly crafted” AI bills across the US.
- Microsoft and OpenAI signed a non-binding deal allowing OpenAI to restructure into a for-profit company. Details on how much of OpenAI Microsoft will own or whether Microsoft will retain exclusive access to OpenAI’s latest models were not disclosed. Attorneys general in California and Delaware must approve OpenAI’s new structure for the change to go into effect.
- OpenAI announced new safeguards to detect and respond to users showing signs of mental health distress and danger, including helping users reach suicide hotlines. Meta also announced plans to introduce new mental health safeguards on its AI chatbots for signs of suicide, self-harm, and eating disorders, suggesting that its chatbots will now connect teen users to mental health resources. These recent moves came in response to growing scrutiny over the impact that AI systems have on youth mental well-being.
- Apple quietly revised internal AI training policies following Trump’s return to the White House, according to documents obtained by POLITICO. Updates included reclassifying DEI as a “controversial” topic, expanding sensitivity around Trump, and flagging references to Apple’s leadership as brand risks. Apple denied the policy changes, citing its Responsible AI Principles.
- YouTube responded to subpoenas issued by the House Committee on the Judiciary about its content moderation policies surrounding the COVID-19 pandemic and freedom of expression, arguing that “no matter the political atmosphere, YouTube will continue to enable free expression on its platform, particularly as it relates to issues subject to political debate.”
- Amid mounting employee and investor pressure, Microsoft terminated cloud and AI services used by a unit of Israel’s military after uncovering that its Azure platform was deployed to store and analyze millions of intercepted Palestinian phone calls. The decision followed an investigation that revealed the scope of Israel’s secret mass surveillance program built on Microsoft’s digital infrastructure. Microsoft President Brad Smith told staff the company would not support “mass surveillance of civilians” anywhere in the world.
- The Business Software Alliance (BSA), a global trade association, urged governments to act now on quantum readiness with a six-point strategy that includes software R&D, workforce development, post-quantum cryptography, and international cooperation. The report highlighted the urgency of investing in quantum-specific software, fostering industry adoption, and upgrading digital infrastructure to meet coming threats.
In the courts:
- A divided Supreme Court announced that it would hear arguments in December on the Trump Administration’s removal of FTC Commissioner Rebecca Kelly Slaughter, and that it would allow President Trump to keep Slaughter out of the agency in the meantime, following her reinstatement by a federal appeals court earlier in the month. The Court stated that it would consider the broader question of whether presidents can remove independent regulators without cause. The three liberal justices dissented, with Justice Elena Kagan writing that the court’s order allows the president to “extinguish the agencies’ bipartisanship and independence.”
- Amazon agreed to pay $2.5 billion to settle a Federal Trade Commission (FTC) lawsuit alleging that the company deceptively enrolled users in Prime and made it difficult to cancel their subscription. The settlement was one of the largest in FTC history and included $1 billion in civil penalties and $1.5 billion in restitution to impacted customers. The FTC argued that Amazon utilized manipulative design, or “dark patterns,” to trap users into recurring subscriptions. Amazon did not admit wrongdoing but will notify eligible users about compensation and streamline the cancellation process.
- A federal judge ruled that Google must share its search data with “qualifying competitors” as a remedy for its illegal monopoly in search. The ruling also restricted the payments Google can make to secure preferential placement for its search engine on web browsers and smartphones. The decision was considered one of the most significant attempts to “level the tech playing field” in the last 20 years; however, the court stopped short of Google’s worst-case scenario of forcing the company to sell Chrome. Google is likely to appeal the decision.
- Anthropic agreed to pay $1.5 billion, the largest copyright settlement in US history, to resolve a landmark lawsuit accusing the company of using over 500,000 pirated books to train Claude. Anthropic was accused of illegally downloading books from “shadow,” or pirated, libraries. As part of the deal, Anthropic must destroy the pirated data and could still face future infringement claims. An earlier ruling in the case held that training AI models on legally acquired books constitutes “fair use.”
Legislation Updates
The following bills made progress across the House and Senate in September:
- Digital Asset Market Clarity Act of 2025 (CLARITY Act) – H.R. 3633. Introduced by Rep. J. French Hill (R-AR), the bill passed the House and was sent to the Senate.
- Generative AI Terrorism Risk Assessment Act – H.R. 1736. Introduced by Rep. August Pfluger (R-TX), the bill advanced through the House Committee on Homeland Security.
- Romance Scam Prevention Act – S. 841. Introduced by Sen. Marsha Blackburn (R-TN), the bill advanced through the Senate Committee on Commerce, Science, and Transportation.
The following bills were introduced in the Senate in September:
- SANDBOX Act – S. 2750. Introduced by Sen. Ted Cruz (R-TX), the bill would “require the Director of the Office of Science and Technology Policy to establish a Federal regulatory sandbox program for artificial intelligence, and for other purposes.”
- Children Harmed by AI Technology Act (CHAT Act) – S. 2714. Introduced by Sen. Jon Husted (R-OH), the bill would “require artificial intelligence chatbots to implement age verification measures and establish certain protections for minor users, and for other purposes.”
- RAISE Act of 2025 – S. 2740. Introduced by Sen. Jon Husted (R-OH), the bill would “amend the Elementary and Secondary Education Act of 1965 to encourage States to develop academic standards for elementary school and secondary school for artificial intelligence and other emerging technologies.”
- Consumer Safety Technology Act – S. 2766. Introduced by Sen. John R. Curtis (R-UT), the bill would “direct the Consumer Product Safety Commission to establish a pilot program to explore the use of artificial intelligence in support of the mission of the Commission and to direct the Secretary of Commerce and the Federal Trade Commission to study and report on the use of blockchain technology and tokens, respectively.”
- A resolution expressing the sense of the Senate that the comments made by Federal Communications Commission Chairman Brendan Carr… – S.Res. 407. Introduced by Sen. Edward J. Markey (D-MA), the resolution expressed “the sense of the Senate that the comments made by Federal Communications Commission Chairman Brendan Carr on Wednesday, September 17, 2025, threatening to penalize ABC and Disney for the political commentary of ABC late night host Jimmy Kimmel were dangerous and unconstitutional.”
The following bills were introduced in the House in September:
- The American Artificial Intelligence Leadership and Uniformity Act – H.R. 5388. Introduced by Rep. Michael Baumgartner (R-WA), the bill would “establish a clear, national framework for Artificial Intelligence (AI) development by preempting conflicting state-level regulations and codifying President Trump’s executive order on Artificial Intelligence.”
- Fair Artificial Intelligence Realization Act (FAIR Act) – H.R. 5315. Introduced by Rep. Harriet Hageman (R-WY), the bill would “prohibit the Federal procurement of large language models not developed in accordance with unbiased AI principles.”
- AI Sovereignty Act – H.R. 5288. Introduced by Rep. Eugene Vindman (D-VA), the bill would “direct the Secretary of Commerce to submit reports on strategies regarding the development of, and research relating to, critical artificial intelligence technologies, and for other purposes.”
- Protect Elections from Deceptive AI Act – H.R. 5272. Introduced by Rep. Julie Johnson (D-TX), the bill would “prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes.”
- Literacy in Future Technologies (LIFT) Artificial Intelligence Act – H.R. 5584. Introduced by Rep. Thomas H. Kean (R-NJ), the bill would “improve educational efforts related to artificial intelligence literacy at the K through 12 level, and for other purposes.”
- Growing University AI Research for Defense Act (GUARD Act) – H.R. 5466. Introduced by Rep. Ronny Jackson (R-TX), the bill would authorize the Secretary of Defense “to establish an AI Institute at a senior military college (SMC) to advance critical defense technologies, workforce development, and innovative applications for artificial intelligence to strengthen America's national security and defense capabilities.”
- AI Warnings And Resources for Education (AWARE) Act – H.R. 5360. Introduced by Rep. Erin Houchin (R-IN), the bill would “direct the Federal Trade Commission to develop and make available to the public educational resources for parents, educators, and minors with respect to the safe and responsible use of AI chatbots by minors, and for other purposes.”
- SHIELD Act of 2025 – H.R. 5215. Introduced by Rep. Haley M. Stevens (D-MI), the bill would “direct the Secretary of Defense to establish a pilot program to develop a training program that teaches members of the Armed Forces to interact with digital information in a safe and responsible manner, and for other purposes.”
- Algorithmic Accountability Act of 2025 – H.R. 5511. Introduced by Rep. Yvette Clarke (D-NY), the bill would “direct the Federal Trade Commission to require impact assessments of certain algorithms, and for other purposes.”
- Expressing the sense of the House of Representatives… – H.Res. 694. Introduced by Rep. Greg Landsman (D-OH), the resolution expressed “the sense of the House of Representatives that the Centers for Medicare & Medicaid Services should halt the pilot program and should not jeopardize seniors’ access to critical health care by utilizing artificial intelligence to determine Medicare coverage.”
- Unleashing Low-Cost Rural AI Act – H.R. 5227. Introduced by Rep. Jim Costa (D-CA), the bill would “conduct a study on the impact of artificial intelligence and data center site growth on energy supply resources in the United States, and for other purposes.”
- Condemning attempts to use Federal regulatory power or litigation to suppress lawful speech… – H.Res. 748. Introduced by Rep. Yassamin Ansari (D-AZ), the resolution condemned “attempts to use Federal regulatory power or litigation to suppress lawful speech, particularly speech critical of a political party or the President of the United States, and warning against the rise of authoritarianism.”
- Liquid Cooling for AI Act of 2025 – H.R. 5332. Introduced by Rep. Jay Obernolte (R-CA), the bill would “direct the Comptroller General of the United States to conduct a technology assessment focused on liquid-cooling systems for artificial-intelligence compute clusters and high-performance computing facilities, require the development of Federal Government-wide best-practice guidance for Federal agencies, and for other purposes.”
- No Social Media at School Act – H.R. 5173. Introduced by Rep. Angie Craig (D-MN), the bill would “require social media companies to use geofencing to block access to their social media platforms on K-12 education campuses, and for other purposes.”
- HONOR Act – H.R. 5090. Introduced by Rep. Nancy Mace (R-SC), the bill would “amend the Uniform Code of Military Justice to expand prohibitions against the wrongful broadcast, distribution, or publication of intimate visual images, including digital forgeries, and for other purposes.”
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.