November 2025 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Ben Lennett / Dec 3, 2025

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press. Isabel Epistelomogi, a policy and research intern with Freedman Consulting, also contributed to this article.

Among the leading stories this month were a surge of AI-industry political spending ahead of the 2026 midterms and an antitrust ruling in favor of Meta. Leading AI firms launched an array of super PACs—most prominently Leading the Future, which has raised over $100 million—to oppose state-level AI regulations and boost pro-AI candidates. Meta followed with two new PACs backing candidates supportive of AI innovation. Also in November, a federal judge dismissed the Federal Trade Commission’s long-running antitrust challenge to Meta’s acquisitions of Instagram and WhatsApp, ruling that the company is not a monopolist given competition from TikTok and YouTube—an outcome praised by industry advocates and sharply criticized by antitrust reformers.

There were several more key developments across the rest of the policy landscape, including in federal agencies and Congress. The executive branch renewed calls for a federal ban on state AI rules and expanded surveillance tools through ICE and the FBI. On Capitol Hill, lawmakers scrutinized Meta over scam advertising, reinstated key cybersecurity authorities, and introduced new bills addressing AI-related fraud, job impacts, export controls, and national security. Civil society groups forced the release of records detailing the NYPD's extensive use of faulty facial recognition technology, and Google, Meta, and TikTok filed separate challenges to California’s SB 976, a law that restricts personalized social media feeds and certain notifications for minors.

Read on to learn more about November developments in US tech policy.

Leading AI companies launch super PACs in preparation for the 2026 midterm elections

Summary

As 2026 approaches, the AI industry is preparing for the upcoming midterm elections. Industry leaders have poured hundreds of millions of dollars into super PACs to combat state-level AI regulations and support pro-AI candidates. Earlier this year, a group of AI industry leaders launched Leading the Future, a new super PAC that has already raised over $100 million from Andreessen Horowitz, OpenAI co-founder Greg Brockman, and Palantir co-founder Joe Lonsdale, among others. Leading the Future generally opposes AI regulation and will support AI-friendly candidates in state and national elections during the 2026 midterms and beyond.

This month, Leading the Future launched a multimillion-dollar campaign to defeat New York Assemblymember Alex Bores (D), the lead sponsor of the state’s RAISE Act, who is now running for Congress. The bill, which awaits New York Governor Hochul’s signature, would require AI firms to publish safety plans, disclose major incidents involving AI systems, and prohibit the release of “high-risk models.” Leading the Future argued that the bill would stall innovation and is “a clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership.” Bores is the first target Leading the Future has announced in its planned nationwide push to back pro-AI candidates. Beyond state-level races, Build American AI, an organization affiliated with Leading the Future, announced a $10 million advertising campaign advocating for a single federal AI regulation that preempts state laws.

Similarly, Meta recently launched two super PACs aimed at supporting pro-AI candidates. The American Technology Excellence Project pledged “tens of millions” to back AI-friendly state-level candidates across the country, while Mobilizing Economic Transformation Across (META) California launched to support California candidates who favor “AI innovation over stringent regulations.”

The AI industry’s efforts to target state-level AI regulation have provoked bipartisan criticism and opposition. To counter Leading the Future, former Reps. Chris Stewart (R-UT) and Brad Carson (D-OK) launched separate Republican and Democratic super PACs that aim to raise $50 million for candidates who back AI safeguards and are “committed to defending the public interest against those who aim to buy their way out of sensible AI regulation.” The White House criticized the launch of Leading the Future as a “slap in the face,” particularly given Trump’s recent pro-AI executive orders banning “woke AI” and expanding data centers.

In response to being targeted by Leading the Future, Assemblymember Bores suggested that the AI industry is worried that he is the “biggest threat they would encounter in Congress to their desire for unbridled AI at the expense of our kids’ brains, the dignity of our workers, and expense of our energy bills.” Vermont State Rep. Monique Priestley (D) and Utah State Rep. Doug Fiefia (R) released a joint statement calling the surge of PAC money a “significant escalation in the fight over AI regulations” and suggested that AI companies “increasingly view state-level regulatory efforts as a threat.” Encode AI, a youth-led organization focused on AI trust and safety issues, described Meta’s PAC spending as a “modern day David v. Goliath playing out in the AI regulatory space.”

What We’re Reading

  • Theodore Schleifer, “Fears About A.I. Prompt Talks of Super PACs to Rein In the Industry,” New York Times.
  • Paulo Carvão, “$150 Million AI Lobbying War Fuels The Fight Over Preemption,” Forbes.

Judge dismisses FTC antitrust case against Meta

Summary

The Federal Trade Commission (FTC) suffered a decisive defeat this month in its antitrust case against Meta, as Judge James Boasberg of the US District Court for the District of Columbia dismissed the case, finding that Meta is not a monopolist. The lawsuit, initiated five years ago during the first Trump administration, concerned Meta’s acquisitions of Instagram in 2012 and WhatsApp in 2014. The FTC alleged that Meta, then known as Facebook, violated antitrust law by buying up these rivals rather than competing with them, thereby illegally monopolizing the market it defined as “personal social networking.”

Boasberg concluded that the FTC failed to establish that Meta holds a monopoly in the “personal social networking” market. Citing the philosopher Heraclitus, he wrote that “no man can ever step into the same river twice,” reasoning that the market had changed dramatically since the lawsuit was filed, complicating the monopolization claims. The judge also found that the FTC failed to prove that rivals like TikTok and YouTube do not compete with Meta’s apps, noting that consumers switch to TikTok and YouTube when Meta’s apps are unavailable, and vice versa, leading the court to conclude that Meta is “not a monopolist insulated from competition.”

The ruling prompted divergent reactions. Supporters such as the Information Technology and Innovation Foundation (ITIF) called the decision a win for the rule of law, hailing Meta’s victory as the “end of the long-running crusade by a radical antitrust movement to dismantle America’s leading technology companies.” Critics, conversely, argued the ruling was “profoundly misguided” and “catastrophic” for antitrust accountability. In an op-ed, Tim Wu argued that the decision, issued despite strong evidence of monopoly power (such as Meta’s extraordinary profits), signals that the world’s wealthiest corporations are effectively “above the law.” Meta’s win comes amid strong political alignment between tech companies and the Trump administration, which has pursued a deregulatory agenda for tech domestically and abroad. As a consequence, the future trajectory of antitrust policy remains uncertain, and an appeal of the decision, while typically expected in such cases, is not guaranteed.

What We’re Reading

  • Cristiano Lima-Strong, “Key Excerpts: Meta Wins Bout with FTC Over Instagram, WhatsApp Deals,” Tech Policy Press.
  • Naomi Nix and Will Oremus, “The government failed to break up Meta. It’s becoming a pattern,” The Washington Post.
  • Tim Wu, “The Bad Reasoning in the Meta Antitrust Ruling Isn’t Even the Worst Part,” New York Times.

Tech Tidbits & Bytes

Tech Tidbits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.

In the executive branch and agencies:

  • President Trump escalated his push for a federal ban on state-level AI regulation, calling on Congress to attach a moratorium to the National Defense Authorization Act and drafting a potential executive order directing DOJ lawsuits against states with AI laws. The move followed Republican efforts over the summer to pass a similar ban, which failed amid significant bipartisan backlash. The draft order would invoke the Commerce Clause, which limits states’ regulation of interstate commerce, to argue for federal preemption; legal experts, however, have warned that such an order would exceed executive authority. The move drew a divided GOP response: critics like Gov. Ron DeSantis (R-FL) and Sen. Josh Hawley (R-MO) rejected the measure, calling the moratorium an appeasement of Big Tech, while industry-aligned Republicans such as Sen. Ted Cruz (R-TX) argued that a single national AI standard is key to maintaining US innovation and competitiveness with China.
  • The Privacy and Civil Liberties Oversight Board published a staff report on the Federal Bureau of Investigation’s (FBI) significant expansion of its use of open-source intelligence (OSINT), a practice that increasingly blurs the line between traditional law enforcement and commercial data. The report revealed the FBI’s deliberate push to harvest information from social media, public forums, and other online platforms, often with minimal internal oversight and ill-defined standards, in the name of anticipating threats. While the FBI insisted its open-source collection avoids First Amendment violations, the report suggested that the agency routinely gathers and retains vast amounts of information on political speech, protest activity, and other constitutionally protected expression.
  • ICE began using two new apps, Mobile Fortify and Mobile Companion, designed to identify, track, and monitor individuals. Mobile Fortify is reportedly being used to scan and verify individuals' faces, identities, and immigration statuses during on-the-spot encounters. The app was originally designed to verify travelers at US borders but has been repurposed into a mobile tool that ICE and CBP agents use within communities, indiscriminately and without consent, including on US citizens. Mobile Companion allows agents to locate individuals and predict future movements by scanning license plates and accessing a vast network of personal data, including voter rolls, marriage records, and credit headers. The app can also collect and upload face scans to cross-reference individuals against a facial recognition database. Critics argued that the apps enable unconstitutional surveillance and racial profiling.
  • The Federal Communications Commission (FCC) scaled back a Biden-era cybersecurity rule that required telecommunications carriers to implement safeguards to secure their networks from hackers. The previous rule had updated the 1994 Communications Assistance for Law Enforcement Act in response to last year’s Salt Typhoon breach, which compromised millions of phone records, including those of President Trump and Vice President JD Vance. FCC Chair Brendan Carr called the Biden-era regulations “ineffective” and instead proposed establishing a Council on National Security to protect critical networks from cyberattacks and banning foreign adversary-linked facilities from reviewing and approving technology for use in the US.
  • The Congressional Budget Office (CBO) announced that it had been hacked by a suspected foreign actor, though officials have not identified the origin of the intrusion. Officials confirmed that the breach may have exposed emails, chats, and correspondence between lawmakers and the agency’s analysts. An internal alert from the Library of Congress warned staff to avoid emailing the CBO or sharing sensitive data through Microsoft Teams or Zoom.
  • A federal watchdog report found that the Consumer Financial Protection Bureau’s (CFPB) information security program is now “not effective,” citing a steep decline in cybersecurity oversight following significant staff cuts, halted contracts, and stalled modernization efforts under the Trump administration. The Federal Reserve’s Office of Inspector General downgraded the CFPB’s cybersecurity maturity from level 4 (“managed and measurable”) to level 2, warning that the agency has failed to maintain proper system authorizations or document risk analyses.

In Congress:

  • Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) called on the FTC and SEC to launch enforcement actions against Meta following a Reuters investigation suggesting that Facebook and Instagram generated roughly 10% of Meta’s 2024 revenue, about $16 billion, from illicit advertising, including scam promotions, banned goods, and impersonations of government officials. The senators cited evidence of widespread fraudulent ads, including crypto scams, AI-generated deepfakes, payment fraud, and fake offers for federal benefits, and argued that Meta’s platforms may be implicated in over $50 billion in annual US scam losses. They accused the company of cutting safety staff while continuing to run ads impersonating government officials and political figures. Meta rejected the allegations as “exaggerated and wrong,” pointing to a 58% drop in scam reports.
  • Congress temporarily extended two critical cybersecurity laws that lapsed in September, reinstating the Cybersecurity Information Sharing Act of 2015 (CISA 2015) and the State and Local Cyber Grant Program. CISA 2015 provides legal protections for private companies that share cyber threat data with the federal government, while the grant program has allocated $1 billion in cybersecurity funding to state and local governments since 2022. The Business Software Alliance, representing Amazon Web Services, Cloudflare, Microsoft, and Oracle, praised the Senate for including the extensions, warning that continued delays risk deepening information-sharing gaps between the public and private sectors.
  • Sen. Ron Wyden (D-OR), Rep. Adriano Espaillat (D-NY), and 38 Congressional Democrats urged 19 Democratic governors to block ICE from accessing state DMV data, citing concerns over potential misuse under the Trump Administration’s deportation agenda. In the letter, lawmakers warned that ICE conducted nearly 300,000 queries last year through the Nlets data-sharing system, often without a court order. While several states, such as New York, Illinois, and Washington, have already restricted ICE access to DMV data, others have yet to take action. The lawmakers emphasized that restricting bulk data access would not hinder criminal investigations, as agencies could still request records on a case-by-case basis.
  • Sen. Ron Wyden (D-OR) and Rep. Raja Krishnamoorthi (D-IL) urged the Federal Trade Commission (FTC) to investigate Flock Safety, the country’s largest license plate surveillance company, over potential data exposures to hackers and foreign actors. The lawmakers cited findings from cybersecurity firm Hudson Rock showing that passwords from at least 35 Flock customer accounts had been stolen, potentially granting access to billions of Americans’ license plate images.

In civil society:

  • A five-year lawsuit by Amnesty International and the Surveillance Technology Oversight Project (STOP) forced the NYPD to release more than 2,700 records revealing expensive, discriminatory surveillance of protestors and communities of color, including repeated rights-violating uses of facial recognition technology (FRT). The documents revealed that the NYPD deployed error-prone FRT to target people for innocuous behavior, such as speaking another language, wearing culturally specific attire, or posting anti-police messages on social media. They also showed that the NYPD stopped tracking the technology’s accuracy in 2015 after finding high error rates, detailed illicit contracting with a controversial vendor to monitor a private Instagram account and Black Lives Matter protestors, and documented instances in which New Yorkers were wrongly flagged based on race or language. The NYPD spent more than $5 million on FRT between 2019 and 2020. Amnesty International and STOP argued that the disclosures confirm abuses that violate privacy rights, renewing calls alongside the “Ban the Scan” coalition for New York City to ban law enforcement use of facial recognition technology.

In industry:

  • The OpenAge initiative, a new tech-backed industry coalition, launched a device-based age verification standard in response to global regulatory pushes to protect children online. Created by Singapore-based company k-ID and supported by Google, TikTok, Apple, and Amazon, the initiative proposed an “AgeKey” token to verify users’ ages without storing sensitive data long term. While OpenAge aims to preempt regulatory pressure by offering a privacy-conscious approach to age verification, solutions like AgeKey may not qualify as a reasonable age verification method under the criteria in the GUARD Act, recently introduced by Sens. Hawley (R-MO) and Blumenthal (D-CT).
  • OpenAI announced updates to ChatGPT enhancing the model’s ability to recognize distress, de-escalate sensitive conversations, and direct users toward real-world support. The updates were developed in collaboration with more than 170 clinicians, and the company reported a 65-80% reduction in safety-noncompliant responses, stronger reliability in long conversations, and 92% compliance in challenging psychosis/mania tests, compared to 27% in earlier models. Alongside these improvements, OpenAI released a “Teen Safety Blueprint” to guide policymakers and AI companies in designing safer AI systems for minors. The blueprint outlined five principles, including age identification, prohibitions on harmful content, default protections when age is uncertain, parental controls, and research-informed features.
  • OpenAI also suspended FoloToy’s access to its AI models after the company’s chatbot-powered teddy bear, Kumma, was found giving children detailed instructions on lighting matches and engaging in sexually explicit conversations. The findings emerged from a Public Interest Research Group (PIRG) report that tested multiple AI-enabled toys, with Kumma displaying the most alarming breakdown in safety guardrails. Following the revelations, FoloToy halted sales of all products and launched an internal safety audit.
  • Character.ai announced that it will prohibit users under 18 from engaging in live conversations with its AI chatbots, starting on November 25, 2025. The announcement followed several lawsuits from US parents, including one involving a teen’s death, and investigations into disturbing chatbot behavior. While teens will still be able to use Character.ai to create content, such as videos, the company will introduce age-verification measures, including a new in-house age assurance model, and launch an AI safety lab to protect young users.
  • Google removed the Gemma AI model from its AI Studio after Sen. Marsha Blackburn (R-TN) accused the system of fabricating a sexual assault allegation against her. In a letter to Google CEO Sundar Pichai, Blackburn detailed how Gemma falsely claimed she had a non-consensual relationship with a state trooper during a 1987 campaign, an event that never occurred, and cited fake or unrelated news sources.
  • Google’s Threat Intelligence Group (GTIG) identified the first confirmed use of generative AI models in active malware operations, revealing two new malware strains, PromptFlux and PromptSteal, that leverage large language models (LLMs) to dynamically evolve during execution. Deployed by Russian state-backed hackers, PromptFlux used Google’s Gemini API to rewrite and obfuscate its code on demand, while PromptSteal queried Hugging Face LLMs to generate one-line commands for data exfiltration and system reconnaissance. GTIG warned that AI can now be embedded directly into the attack lifecycle, and pointed to a maturing underground marketplace for illicit AI tooling and growing misuse by state actors such as North Korea, China, and Iran. Google responded by disabling malicious assets, updating its classifier and model protections, and reinforcing guardrails in Gemini and across its AI infrastructure.
  • Anthropic revealed the first documented case of an AI system independently executing a large-scale cyber espionage campaign, attributing the attack to a Chinese state-sponsored group. According to the company, the attackers jailbroke its Claude Code tool, enabling it to infiltrate, largely autonomously, around 30 global targets, including tech firms, financial institutions, and government agencies, succeeding in several cases. Claude conducted 80 to 90 percent of the campaign’s operations without human involvement, scanning networks, writing exploit code, harvesting credentials, and generating internal reports. Anthropic responded by banning accounts, notifying victims, and enhancing detection tools.
  • Israeli spyware company NSO Group was acquired by a new group of US-linked investors led by Hollywood producer Robert Simonds. The firm installed David Friedman, a former Trump administration official and US Ambassador to Israel, as executive chairman. NSO Group, which previously faced US sanctions for facilitating surveillance of dissidents and officials, says it will now focus on securing “trustworthy” clients while pitching its technology to US law enforcement. NSO faces ongoing legal troubles, including a court-ordered ban on targeting WhatsApp, which the company is actively appealing, claiming the order would be “catastrophic” to its business. Critics of the acquisition warned that Pegasus, the company’s signature spyware, continues to pose a major threat to civil liberties.
  • Amazon announced up to $50 billion in new investment to create the first AI and high-performance computing infrastructure purpose-built for federal use through AWS, adding 1.3 gigawatts of capacity across new data centers beginning in 2026. The initiative, which supports GovCloud, AWS Secret, and Top Secret Cloud, aligns with the Trump administration’s AI Action Plan and provides agencies with access to tools such as Bedrock, SageMaker, Nova models, and Anthropic’s Claude to accelerate national security, intelligence, and data-driven operations. AWS CEO Matt Garman said the effort will remove longstanding technology barriers and advance US leadership in AI.

In the courts:

  • Google, Meta, and TikTok filed separate lawsuits challenging California SB 976, a 2024 law that requires parental consent for minors to access algorithmically personalized social media feeds. The companies argued that the law’s vague language and parental opt-in requirement violate their First Amendment rights and infringe on minors’ access to lawful speech. YouTube claimed that the law would render its platform unusable for teens, while Meta argued that it unconstitutionally restricts editorial decisions around feed curation. TikTok said that the law "completely disregards the 50 preset safety, privacy, and security settings" on its platform. California’s Department of Justice defended the law as a necessary child protection measure, calling the lawsuit proof that platforms prioritize profits over youth safety. The companies asked to consolidate their cases with an existing NetChoice lawsuit against SB 976.
  • OpenAI opposed a court order requiring it to hand over 20 million anonymized ChatGPT conversations to The New York Times as part of a copyright lawsuit, arguing that doing so would violate user privacy and calling the demand a “speculative fishing expedition.” The company said that 99.99% of the requested data is irrelevant to the case and includes highly personal user information and content. OpenAI reported that it offered privacy-preserving alternatives, but The New York Times rejected them and continued to seek broad access. The company filed a motion to reverse the order, warning that compliance would undermine trust and jeopardize sensitive conversations from users unconnected to the lawsuit. US Magistrate Judge Ona Wang ultimately rejected OpenAI’s request and also ordered OpenAI’s attorneys to share internal communications with the court about data deletion.

Legislation Updates

The following bills made progress across the House and Senate in November:

  • NET Act — S. 503. Introduced by Sen. John Hickenlooper (D-CO), this bill passed the Senate and is currently pending in the House.
  • AI-WISE Act — H.R. 5784. Introduced by Rep. Hillary J. Scholten (D-MI), the bill advanced through the House Committee on Small Business by a unanimous vote of 27-0.
  • Pipeline Security Act — H.R. 5062. Introduced by Rep. Julie Johnson (D-TX), the bill advanced through the House Committee on Homeland Security.
  • PILLAR Act — H.R. 5078. Introduced by Rep. Andrew Ogles (R-TN), the bill passed the House and was received in the Senate.
  • Strengthening Cyber Resilience Against State-Sponsored Threats Act — H.R. 2659. Introduced by Rep. Andrew Ogles (R-TN), this bill passed the House and was received by the Senate.
  • Generative AI Terrorism Risk Assessment Act — H.R. 1736. Introduced by Rep. August Pfluger (R-TX), this bill passed the House and was received by the Senate.
  • Strengthening Oversight of DHS Intelligence Act — H.R. 2261. Introduced by Resident Commissioner Pablo Hernández (D-PR), this bill passed the House.

The following bills were introduced in the Senate in November:

  • AI-Related Job Impacts Clarity Act — S. 3108. Introduced by Sen. Josh Hawley (R-MO), the bill would require publicly traded companies and certain non-publicly traded companies designated by the Department of Labor to submit quarterly disclosures detailing the job impacts attributable to artificial intelligence.
  • GAIN AI Act of 2025 — S. 3150. Introduced by Sen. Jim Banks (R-IN), the bill would require companies seeking export licenses for advanced artificial intelligence chips bound for “countries of concern” to certify that US customers are given priority access to those chips.
  • Liquid Cooling for AI Act of 2025 — S. 3269. Introduced by Sen. David McCormick (R-PA), the bill would direct the Government Accountability Office (GAO) to assess the research and development (R&D) needs and conditions affecting liquid cooling utilization in data centers.
  • Advanced Artificial Intelligence (AI) Security Readiness Act — S. 3202. Introduced by Sen. Todd Young (R-IN), the bill would direct “the NSA’s AI Security Center to develop an AI Security Playbook that identifies potential vulnerabilities and threats and establishes security strategies and contingency plans for America’s advanced AI systems.”
  • A resolution affirming the critical importance of preserving the United States' advantage in artificial intelligence… — S.Res. 490. Introduced by Sen. Chris Coons (D-DE), the resolution asserts that maintaining US dominance in artificial intelligence over China is essential to national security, economic leadership, and global technological influence.

The following bills were introduced in the House in November:

  • Securing Reliable Power for Advanced Technologies Act — H.R. 5927. Introduced by Rep. Andy Barr (R-KY), the bill would amend the Defense Production Act of 1950 to accelerate “critical artificial intelligence infrastructure projects” by designating them as priority national defense projects.
  • Algorithm Accountability Act — H.R. 6266. Introduced by Rep. Mike Kennedy (R-UT), the bill would amend Section 230 of the Communications Act of 1934 to limit liability protections for certain social media platforms.
  • SPY Kids Act — H.R. 6273. Introduced by Rep. Mariannette Miller-Meeks (R-IA), the bill would create federal restrictions on how online platforms may collect, analyze, or use children’s and teens’ personal data for market or product-focused research.
  • Chip EQUIP Act — H.R. 6207. Introduced by Rep. Zoe Lofgren (D-CA), the bill would prohibit the use of CHIPS Act funding for projects that procure, install, or use certain fully assembled semiconductor manufacturing equipment produced by “foreign entities of concern.”
  • STRIDE Act — H.R. 6058. Introduced by Rep. Bill Huizenga (R-MI), the bill would direct the Department of State to coordinate with allied and partner nations to strengthen the security of the global semiconductor supply chain and prevent the transfer of critical semiconductor technologies to China and other foreign adversaries.
  • AI Fraud Deterrence Act — H.R. 6306. Introduced by Rep. Ted Lieu (D-CA), the bill would “enhance penalties for those who use artificial intelligence to commit fraud.”
  • DISRUPT Act — H.R. 5912. Introduced by Rep. Raja Krishnamoorthi (D-IL) and Del. James Camacho Moylan (R-GU), the bill would require intelligence assessments and interagency task forces to counter how China, Russia, Iran, and North Korea use technology to accelerate military modernization and undermine US security tools.
  • AI for America Act — H.R. 6304. Introduced by Rep. Jennifer Kiggans (R-VA), the bill would “codify a national strategy for artificial intelligence that promotes American leadership, removes regulatory barriers, and ensures data are free from security risks and ideological bias.”
  • Cyber Deterrence and Response Act of 2025 — H.R. 6309. Introduced by Rep. August Pfluger (R-TX), the bill would levy sanctions on “designated critical cyber threat actors.”
  • HEAL A.I. Act — H.R. 6077. Introduced by Rep. Nanette Diaz Barragán (D-CA) on November 18, 2025, the bill would require updates to medical education and training programs to incorporate the effective and responsible use of artificial intelligence technologies in clinical and academic settings.
  • Artificial Intelligence for Advancing Literacy and Learning Act — H.R. 6159. Introduced by Rep. Luz Rivas (D-CA), the bill would create the Artificial Intelligence Literacy and Education Commission in the Office of Science and Technology Policy.

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Ben Lennett
Ben Lennett is the Managing Editor of Tech Policy Press. A writer and researcher focused on understanding the impact of social media and digital platforms on democracy, he has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
