January 2026 US Tech Policy Roundup

Rachel Lau, Shirley Frame, Ben Lennett / Feb 2, 2026

Rachel Lau and Shirley Frame work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

Illustration adapted from a vintage Spanish-language microbiology book. Calsidyrose/CC by 2.0

January’s US tech policy was marked by significant activity in Congress. As part of negotiations to avert a government shutdown, lawmakers advanced a bipartisan FY 2026 appropriations package that largely rejected the Trump administration’s budget proposal, which would have imposed significant cuts to federal science and technology funding. Though a partial government shutdown was still triggered, the House passed a package of bills that would stabilize or increase funding for core institutions such as the National Institutes of Health (NIH), the National Institute of Standards and Technology (NIST), and the National Science Foundation (NSF), with explicit support for artificial intelligence research, standards development, and shared research infrastructure.

The Senate also fast-tracked and unanimously passed the DEFIANCE Act, legislation aimed at strengthening protections and accountability around AI-enabled sexual exploitation, with the bill now under consideration in the House. The legislation was passed in response to intensifying concern over AI-enabled harms, catalyzed by the fallout from xAI’s Grok chatbot. The mass generation of non-consensual intimate imagery and child sexual abuse material through Grok prompted investigations and enforcement actions abroad and renewed pressure on US lawmakers to act.

Read on to learn more about January developments in US tech policy.

Congress moves FY 2026 funding package with tech and science implications

Summary

Congress advanced a series of FY 2026 appropriations bills that largely rejected the deep cuts to federal science and technology funding proposed in President Trump’s budget request last year – cuts that would have been the largest since World War II. The House passed and sent to the Senate a bipartisan funding package including appropriations bills for Labor, Health and Human Services, Education, and Related Agencies; Defense; and Transportation, Housing and Urban Development, and Related Agencies. The package stalled in the Senate in late January over disagreements on the Department of Homeland Security (DHS) appropriations bill, but passed after a bipartisan agreement to negotiate the Homeland Security appropriations separately, with a two-week deadline. A partial government shutdown began on January 31 as funding ran out for several federal agencies; the shutdown will likely be short-lived, as House Speaker Mike Johnson signaled that the House will review and approve the Senate-passed package in early February.

If the package is approved by the House as written, it would stabilize and, in several cases, expand funding for federal technology and research agencies. The NIH would receive $48.7 billion, a $415 million increase from FY 2025 and a bipartisan repudiation of the Trump administration’s request to slash the agency’s budget by 40 percent. NIST would receive $1.85 billion, an increase of approximately $392 million from FY 2025 that would exceed the President’s request by nearly half a billion dollars. The NIST funding would include $1.2 billion for the Institute’s Scientific and Technical Research and Services (STRS) account and a targeted investment in AI growth. The appropriations package would also mandate a minimum of $55 million for NIST’s existing AI measurement science programs and allow up to $10 million for the US Center for AI Standards and Innovation to further AI testing and standards development. The package would also allocate $8.75 billion for the NSF, including $30 million for the National Artificial Intelligence Research Resource (NAIRR) pilot, and would dedicate funding to a number of other tech spending priorities, including monitoring of CHIPS Act implementation and renewing the Technology Modernization Fund.

What We’re Reading

  • Gabby Miller, John Hendel, and John Hewitt Jones, “What the three-bill funding package means for tech,” Politico.
  • Matt Bracken, “Congress earmarks $5M for TMF in fiscal 2026 funding bills,” FedScoop.
  • Justin Doubleday, “Lawmakers boost funding for NIST after proposed cuts,” Federal News Network.
  • Andres Picon, “Lawmakers rake in earmarks for water, energy projects,” E&E News.
  • Clare Zhang, “Congress Set to Finalize Science Budgets Rejecting Trump Cuts,” AIP.
  • William Broad, “Congress Is Rejecting Trump’s Steep Budget Cuts to Science,” New York Times.

X’s Grok sparks global scrutiny over AI-generated non-consensual intimate imagery

Summary

In early January, xAI’s AI chatbot, Grok, became the center of a global crisis over deepfakes after users weaponized its image generation features to create non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) directly on the social media platform X. Unlike standalone "deepfake" apps, Grok allowed users to target victims publicly: by simply replying to a photo of a clothed person with a prompt, users could make the AI generate and post sexualized images in the same thread, visible to millions. The scale of this abuse was unprecedented, with analyses suggesting that the tool generated upwards of 6,700 sexualized images per hour at its peak, targeting everyone from high-profile celebrities to private citizens and minors.

Critics and experts argued that these widespread harms were a predictable outcome of releasing powerful AI tools without "safety by design" principles. X's initial response to the crisis was dismissive, with leadership referring to reports as "legacy media lies" and Musk responding to evidence with laughing emojis. While the company eventually limited image generation capabilities for some users and claimed to implement fixes, for many experts, the incident has highlighted a significant regulatory gap: because Grok is integrated into a social platform, it is currently governed by reactive content moderation laws rather than proactive AI safety regulations, allowing harms to occur at a massive scale before enforcement can intervene.

Still, the incident triggered an immediate, though fragmented, global regulatory and legal response. Indonesia and Malaysia took the most drastic measures, temporarily blocking access to Grok entirely, while India’s Ministry of Electronics and IT (MeitY) demanded an immediate compliance audit and threatened to strip X of its "safe harbor" legal immunity if it failed to act. The European Union opened formal proceedings against X for violations of the Digital Services Act (DSA), and the United Kingdom’s Ofcom launched an investigation. In the US, lawmakers and officials at both the federal and state level condemned the spread of NCII and CSAM through Grok, with some state attorneys general signaling future investigations, though there has been no official response by any US regulatory agency. In response, the Senate fast-tracked the DEFIANCE Act, which was introduced last year, and sent the bill to the House for consideration. Additionally, a class action lawsuit was filed against xAI, alleging the company negligently released a product that humiliates and exploits women for commercial profit, with more suits likely to follow.

What We’re Reading

  • Justin Hendrix, "Class Action Suit Filed Against xAI Over Grok 'Undressing' Controversy," Tech Policy Press.
  • Kaylee Williams, "Grok Supercharges the Nonconsensual Pornography Epidemic," Tech Policy Press.
  • Amber Sinha, "India Cautiously Locks Horns with X Over Grok ‘Undressing’ Controversy," Tech Policy Press.
  • Ramsha Jahangir, "Regulators Are Going After Grok and X — Just Not Together," Tech Policy Press.
  • Justin Hendrix, "The Policy Implications of Grok's 'Mass Digital Undressing Spree'," Tech Policy Press.
  • Justin Hendrix and Ramsha Jahangir, "Tracking Regulator Responses to the Grok 'Undressing' Controversy," Tech Policy Press.
  • Owen Bennett, "Why Europe Could Block X Over Grok Scandal But Probably Won’t," Tech Policy Press.
  • Eryk Salvaggio, "Why Musk is Culpable in Grok's Undressing Disaster," Tech Policy Press.
  • Bruna Santos and shirin anlen, "The Grok Disaster Isn't An Anomaly. It Follows Warnings That Were Ignored," Tech Policy Press.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.

In the executive branch and agencies:

  • ProPublica reported that the Department of Transportation (DOT) discussed plans to use Google’s Gemini to help draft federal regulations, aiming to significantly speed up the rulemaking process. The department presented the plan at a meeting in December 2025, sharing a sample document drafted by Gemini: a “Notice of Proposed Rulemaking” that resembled an actual filing. ProPublica shared that critics within DOT argued LLMs could be susceptible to errors and should not be used to interpret and draft proposed rules. Ben Winters, AI and privacy director at the Consumer Federation of America, warned that the plan was especially concerning in light of recent mass layoffs of subject-matter experts. However, DOT General Counsel Gregory Zerzan defended the strategy, stating that “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ…We want good enough.”
  • The Department of Homeland Security (DHS), the Department of Justice, the Department of State, and the Department of Veterans Affairs published updated AI usage inventories cataloging their respective AI use in 2025. Justin Hendrix at Tech Policy Press reported that DHS’s inventory showed a 40 percent increase in AI deployment at the agency in the second half of 2025, with at least 23 of the new applications used for facial recognition or biometric identification.
  • US Immigration and Customs Enforcement (ICE) posted a request for information from companies about how commercial big data and ad tech tools could support immigration enforcement. ICE stated that the posting was made for information-gathering purposes to determine the marketplace of commercially available solutions as the agency manages growing volumes of operational data. ICE has previously purchased several consumer location data products to assist in its investigations, including licenses from Venntel, a data broker that has sold sensitive consumer data without consent.
  • FBI Director Kash Patel launched an investigation into claims that Minnesota residents used encrypted messaging app Signal to share the whereabouts of Immigration and Customs Enforcement (ICE) agents. Online activist efforts to track ICE agents’ movements and identities escalated following ICE protests in Minneapolis. Protesters launched websites to track ICE operations, accessed law enforcement cameras, and published the personal information of thousands of ICE agents online. Following the Minneapolis protests, Meta blocked users on Facebook, Instagram, and Threads from sharing a database alleged to contain personal information about ICE agents, citing privacy and safety concerns for federal agents.
  • The US Defense Secretary announced that the Pentagon will deploy xAI’s chatbot Grok across both classified and unclassified military systems as part of a broader “AI acceleration” strategy to modernize defense technology and streamline innovation.
  • The US Department of Justice revealed that members of the Department of Government Efficiency (DOGE) improperly accessed Social Security Administration (SSA) data that had been restricted under a court order and shared sensitive information via unauthorized third-party servers.
  • The Office of Management and Budget released a memo rescinding previously required “burdensome software accounting processes,” allowing agencies to maintain software and hardware security policies as they see fit given their risk levels and mission goals. The rescission reflects a broader move toward risk-based cybersecurity governance, giving agencies greater flexibility to prioritize protections for high-impact systems.
  • Trump administration officials disclosed that drafts of the administration’s National Cybersecurity Strategy include expanding the role of private cybersecurity firms in cyberwarfare to assist in offensive cyber operations against criminal and state-sponsored hackers. The proposal would reportedly broaden private sector involvement beyond defensive contracting to executing offensive online campaigns.
  • The Department of Justice (DOJ) announced the creation of an AI taskforce to challenge “excessive” state AI rules that hinder innovation. The taskforce, created in response to the Trump administration’s December 2025 executive order seeking to restrict state AI legislation, is slated to include representatives from the Offices of the Deputy and Associate Attorney General, the Justice Department's Civil Division and the Solicitor General's office.

In Congress:

  • Sens. Mark Warner (D-VA) and Tim Kaine (D-VA) sent a letter to the Inspector General of the Department of Homeland Security (DHS) citing concerns that DHS’s collection of sensitive personal data could be used to violate civil liberties and Fourth Amendment protections. The open letter listed alleged legal violations by DHS and called for an internal audit of DHS’s data collection processes.

In civil society:

  • Consumer Federation of America, Electronic Privacy Information Center, and Fairplay published model legislation to address harms caused by chatbots. The People-First Chatbot Bill aims to establish strong rules and regulations centered around company liability and data security requirements, especially for minors.
  • Data & Society published a report on federal AI policy development, arguing that the deregulation of the industry and rapid ramp-up of use within the federal government will “prove disastrous to workers, communities, and the environment.”
  • The ACLU published a report detailing the recent boom in AI legislation and exploring the utility of AI and computational approaches for analyzing AI legislation at both the state and federal level. The authors argued that inter- and intra-bill tracking and analysis using AI would help support future policymaking and yield stronger long-term analysis. The report called for greater standardization and uniformity of legislative documents across jurisdictions to increase efficiency.
  • The Leadership Conference on Civil and Human Rights released an open letter urging leadership at US tech companies to prioritize user experiences, safety, and civil rights in the development of their products, especially related to AI safeguards and fighting mis- and disinformation.
  • Vanderbilt Policy Accelerator released an AI neutrality regulatory framework that called for foundational model providers to “adhere to neutrality rules among their customers and potential customers.” The framework aimed to increase fairness and prevent unreasonable pricing, speed, or quality discrimination among foundational models and their users.
  • OpenAI and Common Sense Media announced a partnership on a joint ballot measure proposal in California aimed at enhancing protections for children interacting with AI chatbots and other online systems. The new proposal would require that companies identify child and adult users via OS-level "age bracket signals" for apps, institute safeguards for minors, ban child-targeted advertising, and restrict the collection and sharing of children’s data without parental consent.

In industry:

  • TikTok announced the establishment of TikTok USDS Joint Venture LLC, a new US-based entity assuming ownership of TikTok in the United States. TikTok’s new ownership structure complies with President Trump’s executive order approving the sale of TikTok’s US operations to an American investor group in response to an April 2024 federal law requiring divestiture of the US operations of TikTok from Chinese ownership. Under the agreement, the Chinese company ByteDance retains just under 20 percent of the US entity, while 45 percent of the company is owned by Oracle, Silver Lake, and MGX. Other investors, including non-Chinese ByteDance investors, will own the remaining 35 percent of the company. According to the announcement, TikTok USDS Joint Venture will be responsible for data protection, algorithm security, content moderation, and software assurance in the US. In response to the deal, Rep. John Moolenaar (R-MI), Chair of the House Select Committee on China, released a statement stating that the committee would conduct rigorous oversight to ensure that TikTok remains independent under the new structure.
  • Meta paused access to its AI-powered character features for any account with a teen birthday or identified as likely belonging to a teen through the company’s age prediction technology. The company announced plans to develop a version with stricter safety guardrails and parental controls. Meta announced that the new AI characters will have built-in parental controls and will aim to give age-appropriate responses.
  • OpenAI launched new age prediction tools on ChatGPT to better determine whether an account is likely owned by a minor, applying sensitive content protections to those determined to be under 18.
  • Meta blocked Facebook, Instagram, and Threads users from sharing a database containing private information of Immigration and Customs Enforcement (ICE) officers. Meta cited its privacy policies, which prohibit sharing or soliciting personally identifiable information, as online activist efforts to track ICE agents increased following the mass civil unrest in Minnesota over expanding ICE operations.
  • A cryptocurrency super PAC group expanded its war chest to over $190 million, funds intended to push crypto-friendly legislation ahead of the midterm elections in the fall. The group includes super PACs Fairshake, Protect Progress, and Defend American Jobs, as well as companies Coinbase, Andreessen Horowitz, and Ripple.
  • Amazon cut 16,000 corporate jobs in another round of layoffs following a first round of 14,000 job cuts in October 2025. CEO Andy Jassy stated that the cuts were made in anticipation of generative AI filling additional roles in the corporate workforce.
  • The Information Technology Industry Council (ITI) released a memo that calls for a uniform national AI regulatory framework, implementation of the Trump administration’s AI Action Plan and Genesis Mission, expanded AI procurement and workforce training across federal agencies, passage of a federal privacy standard, grid modernization to support data centers, expanded spectrum access, and renewed public-private information sharing on cybersecurity.
  • Google released an updated “Mayors AI Playbook” at the winter meeting for the US Conference of Mayors in Washington. The playbook includes a blueprint for using AI to analyze cyber-attack risks, automate zoning processes, allow for real-time language translation, and a variety of other tasks.

In the courts:

  • The Atlantic filed a federal antitrust lawsuit against Google and its parent company, Alphabet, accusing the companies of using their dominant digital advertising infrastructure to manipulate markets and siphon revenue from publishers and advertisers.
  • The Federal Trade Commission has appealed a federal court ruling that rejected its antitrust lawsuit against Meta Platforms. The FTC had accused Meta of illegally maintaining monopoly power through its acquisitions of Instagram and WhatsApp, arguing that Meta’s purchases of the two companies harmed competition.
  • A federal judge heard arguments from the Department of Homeland Security asking that Meta share the identities of individuals managing an anonymous Instagram account that has posted footage of ICE agents in Pennsylvania, arguing that such postings risk officer safety. The American Civil Liberties Union of Pennsylvania is representing the anonymous user and has argued that the request was a violation of the First Amendment.
  • Snap, parent company of social media app Snapchat, and TikTok reached independent, undisclosed settlements in a case brought by an anonymous teenager, represented by the Social Media Victims Law Center, who alleged the companies’ social media apps were addictive and harmful to her mental health. Meta and YouTube, also named in the case, have not reached settlements and remain scheduled for trial in Los Angeles County Superior Court.
  • A US judge ruled that Elon Musk’s lawsuit challenging OpenAI’s transition from a nonprofit to a for-profit structure can proceed to trial. Musk alleged that OpenAI violated its founding mission through its restructuring and is seeking unspecified monetary damages over his initial $38 million in funding to the company.
  • A group of job applicants filed a federal lawsuit against Eightfold AI, claiming that the company’s AI-driven hiring tools should be subject to the Fair Credit Reporting Act (FCRA). Plaintiffs seek unspecified financial damages and court orders compelling Eightfold to comply with state and federal consumer reporting laws.
  • Google agreed to pay $68 million to settle the claims that its voice assistant illegally recorded users’ private conversations and shared those communications with third parties. The class-action case accused Google of “unlawful and intentional interception and recording of individuals’ confidential communications without their consent.”

Legislation Updates

The following bills made progress across the Senate and House in January:

  • DEFIANCE Act — S. 1837. Introduced by Sen. Dick Durbin (D-IL), the bill passed the Senate with unanimous consent.
  • Children and Teens’ Online Privacy Protection Act — S. 836. Introduced by Sen. Edward Markey (D-MA). The bill was reported out of the Senate Committee on Commerce, Science, and Transportation.
  • AI-WISE Act — H.R. 5784. Introduced by Rep. Hillary Scholten (D-MI). The bill passed the House and was referred to the Senate Committee on Small Business and Entrepreneurship.
  • Combating Online Predators Act — H.R. 6719. Introduced by Rep. Laurel Lee (R-FL). The bill passed the House and was referred to the Senate Committee on the Judiciary.

The following bills were introduced in January:

  • Eliminating Bias in Algorithmic Systems Act — S. 3680 / H.R. 7110. Introduced by Sen. Edward Markey (D-MA) in the Senate and Rep. Summer Lee (D-PA) in the House, the bill would “require agencies that use, fund, or oversee algorithms to have an office of civil rights focused on bias, discrimination, and other harms of algorithms, and for other purposes.”
  • Leveraging Artificial Intelligence to Streamline the Code of Federal Regulations Act of 2026 — H.R. 7226. Introduced by Rep. Blake Moore (R-UT), the bill would “streamline the Code of Federal Regulations (CFR) by using an artificial intelligence (AI) tool to identify redundant and outdated rules.” The Senate companion bill (S. 1110) was previously introduced by Sen. Jon Husted (R-OH).
  • Children Harmed by AI Technology (CHAT) Act — H.R. 7218. Introduced by Rep. Michael Lawler (R-NY) in the House, the bill would “require artificial intelligence chatbots to implement age verification measures and establish certain protections for minor users, and for other purposes.” The Senate companion bill (S. 2714) was previously introduced by Sen. Jon Husted (R-OH).
  • AI Overwatch Act — H.R. 6875. Introduced by Rep. Brian Mast (R-FL), the bill would “require the Under Secretary of Commerce for Industry and Security to require a license for the export, reexport, or in-country transfer of certain integrated circuits, and for other purposes.”
  • TRAIN Act — H.R. 7209. Introduced by Rep. Madeleine Dean (D-PA), the bill would “create an administrative subpoena process to assist copyright owners in determining which of their copyrighted works have been used in the training of artificial intelligence models.”
  • Data Center Transparency Act — H.R. 6984. Introduced by Rep. Robert Menendez (D-NJ), the bill would “require reports on the effects of data centers on air quality and water quality, and on electricity consumption by data centers.”
  • Expanding AI Voices Act — H.R. 7158. Introduced by Rep. Valerie Foushee (D-NC), the bill would “codify and expand the National Science Foundation (NSF)’s ExpandAI program.”
  • AI in Health Care Efficiency and Study Act — H.R. 7064. Introduced by Resident Commissioner Pablo Hernandez (D-PR-At Large), the bill would “require the Secretary of Health and Human Services to conduct a study on strategies for the application of artificial intelligence technologies that can be used in the health care industry to improve administrative and clerical work and preserve the privacy and security of patient data, and for other purposes.”
  • Realigning Mobile Phone Biometrics for American Privacy Protection Act — H.R. 7124. Introduced by Rep. Bennie Thompson (D-MS), the bill would “prohibit the use of facial recognition mobile phone applications outside ports of entry, and for other purposes.”
  • Make Elections Great Again Act — H.R. 7300. Introduced by Rep. Bryan Steil (R-WI), the bill would “promote the integrity and improve the administration of elections for Federal office, and for other purposes.”
  • To require the Secretary of Commerce to conduct public awareness… — H.R. 7151. Introduced by Rep. Nanette Barragan (D-CA), the bill would “require the Secretary of Commerce to conduct a public awareness and education campaign to provide information regarding the benefits of, risks relating to, and the prevalence of artificial intelligence in the daily lives of individuals in the United States, and for other purposes.”
  • To require the Secretary of State to conduct assessments… — H.R. 7058. Introduced by Rep. Michael Baumgartner (R-WA), the bill would “require the Secretary of State to conduct assessments of risks posed to the United States by foreign adversaries who utilize generative artificial intelligence for malicious activities, and other purposes.”
  • To facilitate the export of United States artificial intelligence… — H.R. 6996. Introduced by Rep. Randy Fine (R-FL), the bill would “facilitate the export of United States artificial intelligence systems, computing hardware, and standards globally.”
  • To study the impacts of artificial intelligence technology… — H.R. 7294. Introduced by Rep. Robert Menendez (D-NJ), the bill would “study the impacts of artificial intelligence technology with respect to the security of telecommunications networks, and for other purposes.”

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
Shirley Frame
Shirley Frame is an Associate at Freedman Consulting, LLC, where she assists project teams with strategic planning, research, and policy landscaping. Her projects cover a range of issues, including technology policy, criminal justice, education, and youth development.
Ben Lennett
Ben Lennett is the Managing Editor of Tech Policy Press. A writer and researcher focused on understanding the impact of social media and digital platforms on democracy, he has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
