December 2025 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Shirley Frame, Ben Lennett / Jan 7, 2026
Rachel Lau, J.J. Tolentino, and Shirley Frame work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

US President Donald Trump displays a signed executive order as (L-R) Sen. Ted Cruz (R-TX), Commerce Secretary Howard Lutnick and White House AI and crypto czar David Sacks look on in the Oval Office of the White House on December 11, 2025 in Washington, DC. (Photo by Alex Wong/Getty Images)
December’s US tech policy agenda centered on executive action from the White House and a busy close to Congress. President Trump signed an executive order directing several federal agencies to review and potentially challenge state-level AI laws, in an effort to constrain the patchwork of state rules. The administration argued the approach was necessary to support US competitiveness, particularly relative to China. The order drew pushback from governors, lawmakers in both parties, and civil society groups, many of whom questioned its legal basis and objected to threats to withhold federal funding as a means of pressuring states to back down on AI regulation, even as Congress has made little progress on the issue.
Congress did advance legislative measures on AI related to national defense as well as children’s online safety. Lawmakers passed the 2026 National Defense Authorization Act, which included provisions addressing how the military and intelligence agencies assess and use AI systems. A House subcommittee also advanced over a dozen bills targeting online harms to children, including those from AI chatbots. These included legislation requiring companies to make more robust disclosures or to implement age verification. Though many of the narrower bills found bipartisan support, broader regulatory frameworks like KOSA and COPPA 2.0 exposed partisan disagreement over issues like federal preemption of state law and enforcement. Taken together, December reflected ongoing efforts by the executive branch to shape AI policy amid continued uncertainty about how Congress can align on a federal framework.
Read on to learn more about December developments in US tech policy.
President Trump signs executive order seeking to restrict state-level AI regulation
Summary
In December, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” that directed the federal government to challenge and potentially override state AI laws. The order directed various federal agencies, including the Department of Justice, the Department of Commerce, the Federal Communications Commission, and the Federal Trade Commission, to identify and challenge “onerous” state AI regulations. For example, it called on the Department of Commerce to develop a policy that restricts states with “onerous AI laws” from accessing certain funding under the federal Broadband Equity, Access, and Deployment (BEAD) program. The order also called on other federal agencies to evaluate their discretionary grant programs and potentially withhold funds from states whose AI laws conflict with the administration's pursuit of “AI dominance.” President Trump framed the order as an effort to replace a patchwork of state rules with a single national framework in order to keep pace with China. David Sacks, President Trump's top AI advisor, emphasized that the administration will refrain from targeting state laws that promote online child safety.
The executive order sparked significant bipartisan opposition from both state-level and federal lawmakers. Florida Governor Ron DeSantis (R) argued that “an executive order can’t block states” and insisted that Florida will continue to regulate AI. Just hours after the executive order was signed, California Governor Gavin Newsom (D) released a statement criticizing President Trump for “attempting to enrich himself and his associates, with a new executive order seeking to preempt state laws protecting Americans from unregulated AI technology.” Former Trump political adviser Steve Bannon, in a since-deleted post on X, called the executive order "entirely unenforceable" and criticized the administration for alienating the MAGA base while failing to actually limit state regulation. Sen. Alex Padilla (D-CA) issued a statement reaffirming California’s desire to “continue to lead the AI revolution,” despite the administration’s attempted attacks on “state leadership and basic safeguards.”
Similarly, civil society overwhelmingly opposed the executive order. Maya Wiley, President and CEO of the Leadership Conference on Civil and Human Rights, denounced the executive order, calling it “another example of the current administration bullying state and local governments to get what it wants, threatening lawsuits and the revocation of federal funding while making hollow carveouts.” Cody Venzke, Senior Policy Counsel at the American Civil Liberties Union, released a statement calling the executive order a “dangerous policy that the Republican-led Congress has rejected not once, but twice: displacing states from their critical role in ensuring that AI is safe, trustworthy, and nondiscriminatory.” Public Citizen Co-President Robert Weissman encouraged states to “continue their efforts to protect their residents from the mounting dangers of unregulated AI” and noted he expects the order to be challenged and defeated in court. Similarly, Center for Democracy and Technology President and CEO Alexandra Givens urged state lawmakers to “not be deterred in their efforts to provide guardrails for AI,” and reaffirmed that “only Congress can preempt state laws, and it has now twice correctly decided not to pursue this misguided and unpopular policy.”
What We’re Reading
- Alan Butler, “The Preemption Fight Goes Far Beyond AI. States Must Persist,” Tech Policy Press.
- Leah Frazier, “How Trump’s AI Executive Order Gets It Wrong on Civil Rights,” Tech Policy Press.
- Justin Hendrix, “A Critical Look at Trump's AI Executive Order,” Tech Policy Press.
- Jasmine Mithani, “How Might Trump’s AI Executive Order Impact State Laws Regulating Nonconsensual Deepfakes?” Tech Policy Press.
- Oliver Sylvain, “Why Trump’s AI EO Will be DOA in Court,” Tech Policy Press.
Congress takes year-end action on AI for defense and online child safety
Summary
In December, Congress raced to close out the year, with lawmakers advancing significant policy measures on artificial intelligence and online child safety. First, Congress enacted the 2026 National Defense Authorization Act (NDAA), which will significantly shape the military and intelligence communities' use of artificial intelligence and includes provisions that establish limited guardrails. For example, the defense bill mandates that intelligence agencies track and evaluate the performance of their AI models with respect to safety, efficacy, and fairness, and requires similar testing standards for publicly available models like ChatGPT when used by these agencies. At the same time, the legislation incorporates a "rule of construction" preventing agencies from compelling vendors to alter models to favor specific viewpoints, a provision widely interpreted as a response to concerns about "woke AI."
Parallel to the NDAA, the House Subcommittee on Commerce, Manufacturing, and Trade marked the end of its session by advancing 18 bills aimed at the "online epidemic" facing children, including specific measures to regulate AI interactions with minors (see the full list of bills below in “Legislation Updates”). Among the approved legislation was the "SAFE Bots Act," which would prohibit AI chatbots from impersonating licensed professionals like doctors or therapists and require clear disclosures to minors that they are communicating with artificial intelligence. The subcommittee also advanced the "AWARE Act," which would provide educational resources regarding AI chatbots, and the “SCREEN Act,” which would require websites with sexually explicit content to implement age verification systems, similar to ID checks in physical stores, to prevent minors from accessing adult content, aligning with an existing Texas state law.
However, the year-end legislative push highlighted significant partisan fissures, particularly regarding the broader child safety frameworks of KOSA and COPPA 2.0. While Republicans favored advancing these bills to create a national standard, many Democrats opposed them, arguing that broad preemption clauses would "wipe out" stronger existing state privacy laws and grant tech companies immunity from stricter state-level enforcement. Those unresolved debates took on greater urgency in the wake of the Trump administration’s executive order targeting state AI laws, which threw state regulatory efforts into limbo and may place greater responsibility on Congress to legislate.
What We’re Reading
- Amos Toh, “The Good, Bad and Really Weird AI Provisions in the Annual US Defense Policy Bill,” Tech Policy Press.
- Justin Hendrix, “US House Subcommittee Advances 18 Child Online Safety Bills,” Tech Policy Press.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.
In the executive branch and agencies:
- The State Department imposed visa restrictions on five individuals for allegedly leading “organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose.” Those impacted included Imran Ahmed, CEO of the US-based Center for Countering Digital Hate and a US permanent resident, as well as four Europeans. A US judge temporarily blocked the restrictions after Ahmed sued officials for violating his rights to free speech and due process.
- Sean Plankey’s nomination to lead the Cybersecurity and Infrastructure Security Agency stalled after Senate leaders excluded him from a key vote advancing Trump administration nominees, making it unlikely he will be confirmed without being renominated in 2026.
- The Office of Personnel Management (OPM) launched the US Tech Force, a new initiative to recruit over 1,000 artificial intelligence specialists and digital experts into the federal workforce through one- and two-year fellowships. The program also includes partnerships with tech companies like Amazon, Apple, and Microsoft to mentor early-career technologists and modernize federal data infrastructure.
- The US Department of Defense rolled out Google’s Gemini for Government as the first enterprise AI system on GenAI.mil, with the goal of streamlining research, automating administrative workflows, and accelerating planning across the Pentagon. The contract marked the military’s largest deployment of generative AI to date.
- The IRS started rolling out Salesforce’s AI product Agentforce, marking its first major use of AI agents. The deployment aimed to summarize cases, accelerate research, and support overextended staff after Trump-era layoffs reduced the workforce by 25 percent. Officials emphasized that the AI agents cannot make final determinations or disburse funds, framing the deployment as controlled augmentation rather than automated tax adjudication.
- President Trump signed an executive order creating the Genesis Mission, a federal initiative directing the Department of Energy’s national laboratories to integrate government scientific data with industry-built supercomputing infrastructure to train foundation AI models, automate research workflows, and accelerate industry breakthroughs. The order, described by the White House as comparable in ambition to the Manhattan Project, mobilized industry partners, including Nvidia, AMD, and Dell, to construct new facilities and expand computing capacity. It also tasked Office of Science and Technology Policy Director Michael Kratsios with coordinating data sharing across agencies and launching early scientific applications within 270 days.
In Congress:
- Former Reps. Chris Stewart (R-UT) and Brad Carson (D-OK) announced the creation of parallel Republican and Democratic super PACs, along with a new non-profit, Public First, to raise $50 million in support of candidates committed to stronger oversight of AI. The pair framed the effort as a necessary counterweight to “anti-safeguard” super PACs, pointing to broad voter support for guardrails as Congress debates federal preemption and the future of AI governance.
In civil society:
- A new study from the Searchlight Institute highlighted a "regulation gap" in US sentiment toward AI: 67 percent of respondents said they are more concerned that the government is doing too little to regulate AI harms and risks, while only 12 percent worried it is doing too much and stifling progress. Respondents’ biggest concerns centered on AI-driven job displacement, privacy, and misinformation.
- Attorneys general from 42 states and US territories sent a letter to 13 top AI chatbot companies, including OpenAI, Anthropic, Meta, and Google, urging them to create safeguards to protect children from chatbot outputs that promote delusion, self-harm, and other dangerous behaviors. The letter cited reports of deaths, hospitalizations, and psychological harm linked to “sycophantic” AI responses. It also demanded stronger safety testing, warning labels, independent audits, incident reporting, and mandatory training, with a compliance deadline of January 16, 2026. Notably absent from the signatories were the attorneys general of California and Texas, as both states have already passed their own AI regulations.
- A Consumer Reports and Groundwork Collaborative investigation uncovered an experiment in which Instacart used AI tools to show different customers different prices for identical products, with variations of up to 23 percent. In response to the publication and subsequent blowback from consumers and regulators, Instacart ended grocery retailers’ price experiments on its platform, but will still allow its partners to “test different types of promotions and discounts on their customers through the platform.”
- Nonprofit organization Future of Life Institute released its “Winter 2025 AI Safety Index,” warning that most advanced AI models fall short on core safety obligations related to large-scale existential AI risks like CBRN threats. Anthropic, OpenAI, and Google DeepMind topped the list, yet still earned only middling grades. Five other companies, including xAI, Meta, and DeepSeek, ranked in a lower tier, with reviewers citing the absence of safety frameworks and limited transparency. The report called for stronger oversight, independent evaluations, and less reliance on voluntary promises as AI competition intensifies.
- Austrian researchers revealed that WhatsApp’s contact-lookup feature allowed them to enumerate 3.5 billion phone numbers, along with profile photos for over half of users and public “about” text for nearly a third, in what the researchers called “the most extensive exposure of phone numbers.” Meta characterized the exposed information as “basic public data” and fixed the flaw only after the team demonstrated it could query roughly 100 million numbers an hour. Despite years of warnings about the vulnerability, the flaw left users’ data exposed to scammers and hostile governments that could have exploited the same loophole to target users, including in countries where WhatsApp is banned.
- Amnesty International released an investigative report on the “Intellexa Leaks,” providing unprecedented insight into Intellexa’s internal operations. The report revealed that the company’s invasive Predator spyware tool retained remote access to customer surveillance logs, and it detailed other human rights violations connected to Intellexa products. Researchers linked Predator and Intellexa’s emerging ad-based infection tool, Aladdin, to ongoing attacks on journalists, activists, and lawyers worldwide.
In industry:
- TikTok CEO Shou Zi Chew confirmed in an internal memo that parent company ByteDance signed binding agreements to sell its US operations to TikTok USDS Joint Venture LLC. The deal, expected to close on January 22, 2026, values the US business at approximately $14 billion. Under the terms, a consortium led by Oracle, Silver Lake, and MGX will hold a 45 percent stake, while ByteDance will retain a 19.9 percent share, the maximum allowed under current foreign ownership laws. To address national security concerns, the deal mandates that TikTok’s recommendation algorithm be retrained specifically on US user data and hosted on Oracle’s cloud infrastructure. The new venture will be governed by a seven-member, majority-American board of directors and will hold exclusive authority over US data protection, content moderation, and software assurance.
- OpenAI confirmed that attackers accessed user information, including names, email addresses, location data, and device details, following a breach of third-party analytics provider Mixpanel. The company emphasized that no chats, API keys, payment data, or government IDs were compromised and that its own systems remained secure. While there is no evidence yet that the stolen data has been misused, OpenAI warned that it could be used to fuel phishing or social engineering attempts. In response, OpenAI removed Mixpanel and announced expanded security reviews for potentially compromised vendors.
- Venture capital firm Andreessen Horowitz (a16z) published a legislative framework designed to support “Little Tech.” Key recommendations included a national standard for model transparency, the creation of a National AI Competitiveness Institute (NAICI) to provide startups with shared compute and data resources, and a "safe harbor" approach that preserves federal authority over AI development while allowing states to police specific harmful uses such as fraud or discrimination. The roadmap also called for additional protections for minors, including parental controls and mandatory disclosures distinguishing chatbots from human professionals.
- The OpenAI Foundation announced the recipients of its inaugural People-First AI Fund, awarding $40.5 million in unrestricted grants to 208 nonprofits across the country. The funding is intended to support community organizations in their efforts to utilize AI, with many grantees experimenting with AI tools for the first time. Selected from nearly 3,000 applicants, the recipients represent a broad range of community services, including youth digital-skill programs, rural health centers, tribal education networks, libraries, arts groups, disability support services, immigrant rights organizations, and local economic mobility initiatives. OpenAI will release a second $9.5 million wave of board-directed grants focused on “transformative AI work” later this year.
- The pro-industry super PAC Leading the Future launched a $10 million ad and lobbying campaign to urge Congress to adopt a “uniform national AI policy” that would preempt emerging state laws. Backed by more than $100 million from major industry players, including Andreessen Horowitz, Greg Brockman’s family, Joe Lonsdale, and Perplexity, the PAC will leverage TV, digital, and social channels, as well as grassroots organizing.
Legislation Updates
The following bills made progress across the House and Senate in December:
- Standardizing Permitting and Expediting Economic Development (SPEED) Act – H.R. 4776. Introduced by Rep. Bruce Westerman (R-AR). The bill was passed by the House by a vote of 221-196.
- Kids Online Safety Act (KOSA) – H.R. 6484. Introduced by Rep. Gus Bilirakis (R-FL). The bill was forwarded to the House Committee on Energy and Commerce by a vote of 13-10.
- Children and Teens' Online Privacy Protection Act (COPPA 2.0) – H.R. 6291. Introduced by Rep. Laurel Lee (R-FL). The bill was forwarded to the House Committee on Energy and Commerce by a vote of 14-10.
The following bills were forwarded by the House Committee on Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade to the full committee by voice vote:
- Safe Social Media Act – H.R. 6290. Introduced by Rep. Kim Schrier (D-WA).
- No Fentanyl on Social Media Act – H.R. 6259. Introduced by Rep. Gabe Evans (R-CO).
- Promoting a Safe Internet for Minors Act – H.R. 6289. Introduced by Rep. Laurel Lee (R-FL).
- Kids Internet Safety Partnership Act – H.R. 6437. Introduced by Rep. Russell Fry (R-SC).
- AI Warnings and Resources for Education (AWARE) Act – H.R. 5360. Introduced by Rep. Gus Bilirakis (R-FL).
- Assessing Safety Tools for Parents and Minors Act – H.R. 6499. Introduced by Rep. Russ Fulcher (R-ID).
- Sammy’s Law – H.R. 2657. Introduced by Rep. Buddy Carter (R-GA).
- Safer Guarding of Adolescents from Malicious Interactions on Network Games (GAMING) Act – H.R. 6265. Introduced by Rep. Thomas Kean (R-NJ).
- Stop Profiling Youth (SPY) Kids Act – H.R. 6273. Introduced by Rep. Mariannette Miller-Meeks (R-IA).
- Algorithmic Choice and Transparency Act – H.R. 6253. Introduced by Rep. Kat Cammack (R-FL).
- Safeguarding Adolescents From Exploitative (SAFE) Bots Act – H.R. 6489. Introduced by Rep. Erin Houchin (R-IN).
- Shielding Children's Retinas from Egregious Exposure on the Net (SCREEN) Act – H.R. 1623. Introduced by Rep. Craig Goldman (R-TX).
- Safe Messaging for Kids Act – H.R. 6257. Introduced by Rep. Neal Dunn (R-FL).
- App Store Accountability Act – H.R. 3149. Introduced by Rep. John James (R-MI).
- Parents Over Platforms Act – H.R. 6333. Introduced by Rep. Erin Houchin (R-IN).
- Don't Sell Kids' Data Act of 2025 – H.R. 6292. Introduced by Rep. Frank Pallone (D-NJ).
The following bills were introduced in the Senate in December:
- One Fair Price Act of 2025 – S. 3387. Introduced by Sen. Ruben Gallego (D-AZ), the bill would “prohibit certain uses of automated decision systems to inform individualized prices, and for other purposes.”
- Platform Accountability and Transparency Act – S. 3292. Introduced by Sen. Chris Coons (D-DE), the bill would “support research about the impact of digital communication platforms on society by providing privacy-protected, secure pathways for independent research on data held by large internet companies.”
- National Programmable Cloud Laboratories Network Act of 2025 – S. 3468. Introduced by Sen. John Fetterman (D-PA), the bill would “establish a national programmable cloud laboratories network to enhance research efficiency, innovation, and collaboration, and for other purposes.”
- GUARD Act of 2025 – S. 3454. Introduced by Sen. John Cornyn (R-TX), the bill would “authorize the Secretary of Defense to establish one or more National Security and Defense Artificial Intelligence Institutes, and for other purposes.”
- QUIET Act – S. 3354. Introduced by Sen. John Curtis (R-UT), the bill would “amend the Communications Act of 1934 to require disclosures with respect to robocalls using artificial intelligence and to provide for enhanced penalties for certain violations involving artificial intelligence voice or text message impersonation, and for other purposes.”
- Workforce of the Future Act of 2025 – S. 3319. Introduced by Sen. Lisa Blunt Rochester (D-DE), the bill would “promote a 21st century workforce, to authorize grants to support emerging and advanced technology education, and to support training and quality employment for workers in industries most impacted by artificial intelligence.”
- AI Workforce PREPARE Act – S. 3339. Introduced by Sen. Jim Banks (R-IN), the bill would “better forecast and plan for the impact of artificial intelligence on the workforce of the United States, provide data to improve training programs for in-demand industry sectors and occupations, and for other purposes.”
- Health Care Cybersecurity and Resiliency Act of 2025 – S. 3315. Introduced by Sen. Bill Cassidy (R-LA), the bill would “require the Secretary of Health and Human Services and the Director of the Cybersecurity and Infrastructure Security Agency to coordinate to improve cybersecurity in the health care and public health sectors, and for other purposes.”
- A bill to prohibit the use of Federal funds to implement… – S. 3557. Introduced by Sen. Ed Markey (D-MA), the bill would “prohibit the use of Federal funds to implement the Executive order entitled ‘Ensuring a National Policy Framework for Artificial Intelligence.’”
- A bill to prevent fraud enabled… – S. 3495. Introduced by Sen. Amy Klobuchar (D-MN), the bill would “prevent fraud enabled by artificial intelligence, and for other purposes.”
- A bill to establish Federal agency technology… – S. 3410. Introduced by Sen. Andy Kim (D-NJ), the bill would “establish Federal agency technology and artificial intelligence talent teams to improve competitive service hiring practices, and for other purposes.”
The following bills were introduced in the House in December:
- AI for Main Street Act – H.R. 5764. Introduced by Rep. Mark Alford (R-MO), the bill would “amend the Small Business Act to require small business development centers to assist small business concerns with the use of artificial intelligence, and for other purposes.”
- AI Talent Act – H.R. 6573. Introduced by Rep. Sara Jacobs (D-CA), this bill would “establish Federal agency technology and artificial intelligence talent teams to improve competitive service hiring practices, and for other purposes.”
- REAL Act – H.R. 6571. Introduced by Rep. Bill Foster (D-IL), this bill would “require disclosure of the use of content by Federal officials that is created or manipulated using generative artificial intelligence in their publications, and for other purposes.”
- AI Training for National Security Act – H.R. 6530. Introduced by Rep. Rick Larsen (D-WA), the bill would “require the Chief Information Officer of the Department of Defense to include training on artificial intelligence cybersecurity issues for members of the Armed Forces and civilian employees of the Department of Defense, and for other purposes.”
- Protecting Families from AI Data Center Energy Costs Act – H.R. 6529. Introduced by Rep. Greg Landsman (D-OH), the bill would “require the Federal Energy Regulatory Commission to hold a technical conference on protecting residential ratepayers from increased costs associated with large loads, and for other purposes.”
- READ AI Models Act – H.R. 6461. Introduced by Rep. Sarah McBride (D-DE), the bill would “direct the National Institute of Standards and Technology to develop best practices and technical guidance on artificial intelligence model documentation, and for other purposes.”
- Ensuring Safe and Ethical AI Development Through SAFE AI Research Grants – H.R. 6402. Introduced by Rep. Kevin Kiley (R-CA), the bill would “require the National Academy of Sciences to establish a grant program to develop safe AI models and safe AI research, and for other purposes.”
- No Robot Bosses Act – H.R. 6371. Introduced by Rep. Suzanne Bonamici (D-OR), the bill would “prohibit certain uses of automated decision systems by employers, and for other purposes.”
- Ban AI Denials in Medicare Act – H.R. 6361. Introduced by Rep. Greg Landsman (D-OH), the bill would “prohibit the Secretary of Health and Human Services from testing the WISeR model, and amend title XI of the Social Security Act to prohibit the implementation of payment models testing prior authorization under traditional Medicare.”
- Artificial Intelligence Civil Rights Act of 2025 – H.R. 6356. Introduced by Rep. Yvette Clarke (D-NY), the bill would “establish protections for individual rights with respect to computational algorithms, and for other purposes.”
- Deepfake Liability Act – H.R. 6334. Introduced by Rep. Jake Auchincloss (D-MA), the bill would “amend section 230 of the Communications Act of 1934 and the TAKE IT DOWN Act to combat cyberstalking and intimate privacy violations, and for other purposes.”
- To promote a 21st century workforce… – H.R. 6621. Introduced by Rep. Emanuel Cleaver (D-MO), the bill would “promote a 21st century workforce, authorize grants to support emerging and advanced technology education, and support training and quality employment for workers in industries most impacted by artificial intelligence.”
- To promote transparency and accountability… – H.R. 6646. Introduced by Rep. Pramila Jayapal (D-WA), the bill would “promote transparency and accountability in covered digital labor platform work, and for other purposes.”
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.