August 2025 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Ben Lennett / Sep 2, 2025
Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

President Donald Trump signs Executive Orders at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., July 23, 2025. (Official White House Photo by Joyce N. Boghosian)
This month saw the Trump Administration accelerate its push to embed AI in the federal government through major procurement initiatives designed to speed adoption of the technology across agencies. The General Services Administration (GSA) added AI products from Anthropic, Google, Meta, and OpenAI to its purchasing schedule and launched USAi, a platform that lets agencies test AI systems before purchase. Leading AI companies rushed to secure government adoption by offering deep discounts on their products. Administration officials hailed the changes as a key step in advancing the President’s AI Action Plan, while critics warned that the push risked ceding public oversight to corporate-controlled algorithms without sufficient safeguards.
At the same time, the risks of rapid AI deployment were highlighted by two controversies: a leaked policy document from Meta that appeared to endorse chatbots engaging in highly inappropriate conversations with children, and a lawsuit against OpenAI alleging its systems encouraged a teenager’s suicide. Both incidents fueled broader concerns that the industry is racing ahead of the public and policymakers, leaving vulnerable users exposed to unsafe design choices. Beyond debates over AI procurement and safety, policymakers in Congress introduced bills on AI in health care, surveillance-based grocery pricing, the privacy of government data, and foreign influence in media.
Read on to learn more about August developments in US tech policy.
The federal government ramps up AI procurement efforts
Summary
The US federal government made significant strides in its AI procurement efforts as agencies and departments sought to directly support Trump’s AI Action Plan and accelerate the use of AI across the federal government. The GSA announced that it had added AI products from leading US AI companies Anthropic, Google, Meta, and OpenAI to its Multiple Award Schedule (MAS), providing a streamlined path for these companies to offer their products in the federal marketplace. The GSA also announced that it is rolling out USAi, a new government-wide tool that allows federal agencies to test various AI models before procuring a specific model through the normal federal marketplace.
Shortly after the GSA’s statement was released, OpenAI announced a partnership with the US government to provide its AI models and tools to federal agencies for $1 per agency over the next year, as part of the GSA’s expanded AI purchasing list. The move followed months of ongoing engagement between OpenAI executives and federal officials. In response, Anthropic stated that it will also offer its Claude AI chatbot to all three branches of the federal government for $1 a year in total. Anthropic’s offer followed a recent statement confirming that Claude had been added to the GSA’s schedule, “making it easier for all U.S. federal government departments and agencies to quickly access Claude, with pre-negotiated pricing and terms that comply with federal acquisition regulations.” Following suit, Google struck a deal with the GSA to offer Gemini to the federal government for less than $0.50. The deal came days after Google launched “Gemini for Government,” a new government-focused AI product suite.
As part of the GSA’s announcement extending its MAS to US AI companies, Acting GSA Administrator Michael Rigas stated that by “making these cutting-edge AI solutions available to federal agencies, we’re leveraging the private sector’s innovation to transform every facet of government operations.” Federal Acquisition Service Commissioner Josh Gruenbaum emphasized that the GSA is proud to “advance the President’s AI Action Plan” and is focused on procuring models that “prioritize truthfulness, accuracy, transparency, and freedom from ideological bias.” OpenAI CEO Sam Altman said he is proud that OpenAI can “make ChatGPT available across the federal government, helping public servants deliver for the American people.” Gruenbaum also reportedly sent an email instructing the agency to add xAI’s Grok to its expanded schedule to support federal AI procurement.
In response to the GSA’s announcement on USAi and its partnership with AI companies, J.B. Branch, Big Tech Accountability Advocate with Public Citizen, issued a statement criticizing the administration’s efforts, calling it a “contradictory and dangerous path” that could replace “public-sector judgment with corporate-controlled algorithms while sidelining competition and accountability.” Branch also urged the Trump administration to pause its GSA expansion “until enforceable guardrails are put in place to ensure transparency, fairness, worker protections, and open market competition.”
What we’re reading
- Miranda Nazzaro, “Anthropic, Google and OpenAI land GSA contract for governmentwide use,” FedScoop.
- Nina-Simone Edwards, “Discount AI Brings Premium Risks To Public Procurement,” Tech Policy Press.
- Suresh Venkatasubramanian, Costa Samaras, and Cole Donovan, “Trump’s AI Strategy Is At War With Itself,” Tech Policy Press.
Meta document leak and OpenAI lawsuit highlight AI chatbots’ potential harms to children
Summary
This month highlighted concerns about the pace at which Silicon Valley is developing and deploying AI, particularly the risks it poses to children and other vulnerable groups. The release of AI chatbots has drawn attention to safety gaps and uncertainties around companies’ internal policies, prompting ongoing discussions about regulation and the ethical responsibilities of technology companies. With no clear industry standards or rules in place, the current environment for AI chatbots is often described as a “wild west,” where questions of responsibility and accountability remain unresolved.
Two incidents illustrate the stakes. On August 14, Reuters tech reporter Jeff Horwitz published a report highlighting an internal Meta policy document, “Content Risk Standards,” that appeared to endorse its chatbots engaging with children “in conversations that are romantic or sensual” and generating false medical information, among other concerning behaviors. In response, one critic described Meta’s chatbot policy as a “massive unauthorized social experiment” on the public, in which companies act like “authoritarian mad scientists” prioritizing monetization and engagement over user safety. Following Horwitz’s reporting, Meta said it had revised the policy, striking the language permitting romantic or sensual conversations with children; a spokesperson called the language a mistake that was never intended to be authorized by higher-ups.
Less than two weeks later, a lawsuit was filed against OpenAI by Matthew and Maria Raine, the parents of 16-year-old Adam Raine, who died by suicide in April. The suit alleges that OpenAI’s ChatGPT-4o product cultivated a “sycophantic, psychological dependence” in Adam and ultimately encouraged his suicide. According to the complaint, chat logs showed the chatbot urged Adam to conceal his suicidal ideation, failed to flag repeated references to self-harm, and even provided explicit instructions for carrying out his plan. The suit argues that these failures were the “predictable result of deliberate design choices” by a company that prioritized speed and market dominance over safety. The Raine family is seeking damages and injunctive relief mandating enhanced safety measures, age verification, and parental controls for OpenAI’s products.
Taken together, these developments are likely to intensify policy discussions about how best to define responsibility, establish safeguards, and address risks for vulnerable populations.
What we’re reading
- Jeff Horwitz, “Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info,” Reuters.
- Justin Hendrix, “A Conversation with Jeff Horwitz on Meta's Flawed Rules for AI Chatbots,” Tech Policy Press.
- Justin Hendrix, “Experts React to Reuters Reports on Meta's AI Chatbot Policies,” Tech Policy Press.
- Mark MacCarthy, “AI Companies Should be Liable for the Illegal Conduct of AI Chatbots,” Tech Policy Press.
- Justin Hendrix, “Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide,” Tech Policy Press.
Tech tidbits & bytes
Tech Tidbits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and the courts.
In the executive branch and agencies:
- President Trump signed an executive order establishing “America by Design,” a national initiative “breathing new life into the design of sites where people interface with their Government.” The EO established a National Design Studio led by a Chief Design Officer with the goal of improving the presentation and usability of federal services.
- US Transportation Secretary Sean Duffy warned US airlines against using AI to set seat pricing, stating that the department would “engage very strongly if any company tries to use AI to individually price their seating” and “investigate” any airlines that engage in surveillance pricing.
- The White House released a report on cryptocurrency that supports individuals’ ability to self-custody digital assets and the development of dollar-backed stablecoins. The report also opposes the creation of a Central Bank Digital Currency (CBDC).
- President Trump, in a Truth Social post, threatened tariffs on countries “with digital taxes, legislation, rules, or regulations.” It marked the latest escalation in tensions with the European Union, which also saw the Trump Administration announce it is considering potential sanctions on individual EU officials responsible for implementing the bloc’s Digital Services Act.
In Congress:
- Sen. Ted Cruz (R-TX) stated that he would revive a 10-year moratorium on state and local AI laws in upcoming AI legislation.
- Sen. Maggie Hassan (D-NH) sent a letter to 6sense, a data broker company, seeking answers about its opt-out options and expressing concerns about data privacy and consumer protections related to data brokers and other online providers.
In civil society:
- Public Citizen published a report finding that the Trump administration has “withdrawn or halted enforcement actions against 165 corporations of all types,” including “one third of targeted investigations into suspected misconduct and enforcement actions against technology corporations.” Financial technology companies facing investigation by the Consumer Financial Protection Bureau (CFPB) “disproportionately benefited, with eleven withdrawn or halted enforcement actions (seven withdrawn, four halted).”
- Amnesty International launched a new briefing calling on governments to put limits on the power of the five biggest tech companies – Alphabet, Meta, Microsoft, Amazon, and Apple – through stronger antitrust enforcement.
- Americans for Responsible Innovation released a report on policy approaches to AI agent harms, including leveraging tort liability, establishing incentives for proactive governance, creating a non-legal system of compensation, or other hybrid approaches.
In industry:
- Perplexity AI, an AI start-up backed by Jeff Bezos and Nvidia, among others, made a $34.5 billion bid to buy Google Chrome as Google faces an antitrust lawsuit that may force it to divest the browser.
- YouTube announced a new system that uses AI to “interpret a variety of signals that help us to determine whether a user is over or under 18,” such as the types of content watched and searched or the age of the account. When the AI system suspects a user is under 18, it will apply age restrictions, including “disabling personalized advertising, turning on digital wellbeing tools, and adding safeguards to recommendations.” Users who are misidentified can verify their age using a credit card or government ID.
In the courts:
- Elon Musk sued Apple and OpenAI, claiming that the two companies are illegally conspiring to maintain their monopolies by suppressing xAI’s products on the Apple App Store while promoting ChatGPT.
Legislation updates
The following bills were introduced across the House and Senate in August:
- Healthcare Enhancement And Learning Through Harnessing Artificial Intelligence Act (HEALTH AI Act) – H.R. 5045. Introduced by Rep. Ted Lieu (D-CA), the bill would “direct the National Institutes of Health (NIH) to award grants to universities, nonprofits, and government agencies to better understand how generative AI can be used to improve outcomes in the health care sector.”
- Security and Accountability For Everyone Act of 2025 (SAFE Act of 2025) – H.R. 5028. Introduced by Rep. Dave Min (D-CA), the bill would “empower individual Americans and State Attorney Generals to take legal action against violations of the Privacy Act.”
- Stop Price Gouging in Grocery Stores Act of 2025 – H.R. 4966. Introduced by Rep. Rashida Tlaib (D-MI), the bill would “prohibit retail food stores from price gouging and engaging in surveillance-based price setting practices, and for other purposes.”
- No Advanced Chips for the CCP Act of 2025 – H.R. 5022. Introduced by Rep. Raja Krishnamoorthi (D-IL), the bill would “require congressional approval for the export of advanced artificial intelligence semiconductors to the People’s Republic of China, and for other purposes.”
- Stop Foreign Propaganda Act – H.R. 4923. Introduced by Rep. Tony Gonzales (R-TX), the bill would “impose sanctions on persons who knowingly provide content or media services to sanctioned foreign propaganda outlets, and for other purposes.”
- Foreign Robocall Elimination Act – S. 2666. Introduced by Sen. Ted Budd (R-NC), the bill would “create an interagency task force to evaluate foreign robocalls and how best to combat them, with the goal of increasing international cooperation to reduce illegal robocalls.”
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.