February 2025 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Ben Lennett / Mar 3, 2025
Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

President Donald Trump and Elon Musk (White House)
The end of February marked thirty-nine days of President Trump’s second term and included significant developments in US technology governance, highlighted by Elon Musk’s considerable influence in the new administration. Through his leadership at the Department of Government Efficiency (DOGE), Musk has been at the center of an aggressive restructuring of the federal government, including dismantling the United States Agency for International Development (USAID), canceling federal contracts and grants, and firing thousands of federal employees. DOGE has also pushed forward AI integration in federal operations, a move critics warn could lead to AI-driven governance rather than merely streamlining bureaucracy.
Meanwhile, executive actions such as the removal of AI strategy documents by the Department of Health and Human Services (HHS) and funding cuts to the AI Safety Institute (AISI) further underscored the administration’s turn away from responsible AI development. This policy shift aligned with the administration’s attacks against foreign tech regulations, including President Trump's tariff threats and Congress's scrutiny of European efforts to enforce the Digital Markets Act and Digital Services Act against US tech giants. The recent AI Action Summit in France also underscored a growing divergence in global AI policies, as the US and UK declined to sign a multilateral statement on AI governance.
Another major policy development came from President Trump’s Executive Order on federal agency oversight, which significantly curtails the independence of regulatory bodies such as the Federal Trade Commission (FTC), Federal Communications Commission (FCC), and Securities and Exchange Commission (SEC). The order requires agencies to submit regulatory proposals for White House review, consult the administration on strategic priorities, and align their legal interpretations with those of the president and attorney general. Many public interest groups argued that the move undermines independent agencies designed to function with minimal political influence, threatening their ability to regulate major technology firms and enforce antitrust laws.
Read on to learn more about February developments in US tech policy.
Elon Musk and the DOGE Takeover
Summary
Elon Musk's influence in the early months of the Trump administration, particularly through his leadership of DOGE, has markedly reshaped US tech policy towards prioritizing economic and political power over democratic governance. While Musk apparently holds no official government position, his leadership—reminiscent of his tumultuous restructuring of Twitter (now X)—appeared to be guiding DOGE’s aggressive approach to government reforms. This included a takeover of USAID, canceling federal contracts and grants, and dismissing tens of thousands of probationary employees.
Alongside these drastic reductions in government personnel and resources, DOGE accelerated efforts to integrate artificial intelligence (AI) into federal operations, allegedly using it to target supposed “waste, fraud, and abuse.” Critics argued that this could set a precedent for AI to move beyond streamlining bureaucratic processes and assume roles traditionally reserved for human decision-makers, automating not just paperwork but democratic governance itself. While the Biden Administration took a cautious approach to AI in government, emphasizing risk management and regulatory oversight, Musk has advocated an "AI first" strategy, shifting decision-making authority to those who design and control these AI systems.
The big tech takeover in the federal government reflected broader changes in US tech policy, with increasingly global implications. The Trump Administration signaled an aggressive stance against foreign regulation of American tech companies, threatening tariffs in response to European regulatory measures. Rep. Jim Jordan (R-OH) also sent letters to EU Commissioners expressing concerns over the Digital Markets Act and the Digital Services Act, laws that have spurred active enforcement cases against US tech companies, including Google, X, Facebook, and Apple.
These policy shifts also occurred in the context of a rapidly changing AI governance landscape, with competing national and international approaches. At the AI Action Summit in France, held in February 2025, the US and the UK declined to sign the final communique, in which 60 countries agreed to several voluntary commitments to make AI more inclusive and sustainable. Meanwhile, in the US, President Trump touted a vastly different AI strategy with his administration’s AI Executive Order and other initiatives, such as the Stargate Project, a public-private partnership announced in January 2025. As the world navigates the next phase of AI governance, the contrast between the US and European approaches may set the stage for competing global AI ecosystems. If the DOGE approach wins out, the world risks descending into an era where AI and algorithmic decision-making place unprecedented power in the hands of a few tech elites.
What We’re Reading
- Emily Tavoulareas, “DOGE Understands Something the US Policy Establishment Does Not: Technology is the Spinal Cord of Government,” Tech Policy Press.
- “A National Heist? Evaluating Elon Musk’s March Through Washington,” Tech Policy Press.
- Eryk Salvaggio, “Anatomy of an AI Coup,” Tech Policy Press.
Trump Executive Order Limits Federal Agencies’ Independence, With Implications for Tech Policy
Summary
On February 18, President Trump signed an executive order titled “Ensuring Accountability for All Agencies,” expanding the White House’s authority over federal regulatory agencies. The executive order requires independent federal agencies to submit their draft regulations to the White House for review, consult with the White House on their strategic priorities, and permit the White House to set performance standards. It further undermines agencies’ legal authority by stating that no executive branch employee may “advance an interpretation of the law as the position of the United States that contravenes the President or the Attorney General’s opinion on a matter of law.”
The EO has significant implications for the ability of independent federal agencies to engage on tech policy. The White House fact sheet specifically named the Federal Trade Commission (FTC), Federal Communications Commission (FCC), and the Securities and Exchange Commission (SEC) as agencies that have “exercised enormous power over the American people without Presidential oversight.” Despite the White House’s claims, these agencies were established by Congress “specifically to act independently, or semi-independently” from the President, with their leaders often serving terms beyond a single presidency, further shielding them from political pressure. Each of these agencies has played a critical role in shaping the regulatory landscape for tech, protecting consumers, enforcing antitrust, and overseeing critical infrastructure. While it is unclear how the EO will fully impact independent agencies, it is likely to attract significant legal challenges.
Civil society organizations voiced significant opposition to the EO, citing concerns about the constitutionality of the President’s actions. Free Press Co-CEO Craig Aaron released a statement in opposition to President Trump’s EO, claiming that independent agencies are necessary to “tackle highly complex and technical issues that affect hundreds of millions of people and multibillion-dollar corporations without partisan meddling.” Aaron also called on Congress to “strenuously oppose” the EO and for courts to “reject this illegal power grab.” The Center for Democracy & Technology published a press release discussing the President’s efforts to gut agencies that protect American consumers and workers and defending the need for independent agencies such as the FCC and FTC, which employ “expert technologists, lawyers, and engineers” to make decisions “supported by evidentiary record … rather than political favoritism.” John Bergmayer, Legal Director of Public Knowledge, released a statement criticizing the EO, emphasizing the importance of “expert agencies that are free from day-to-day political control” and describing independent agencies as “simply good policy.”
What We’re Reading
- Alton Wang, “Explainer: Trump’s Executive Order on Controlling Independent Agencies,” Common Cause.
- “Trump’s Executive Order Aims to Illegally Undermine Federal Independent Agencies, Shield Big Corporations from Accountability,” Public Citizen.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.
In Congress:
- Democrats on the Senate Committee on Commerce, Science, and Transportation questioned Mark Meador, Trump’s FTC nominee, on the independence of the agency, particularly as the FTC considers five antitrust cases against major technology companies in the coming months.
- Sen. Todd Young (R-IN) published a blog post outlining a “tech power playbook for Donald Trump 2.0,” promoting domestic semiconductor production, strengthening restrictions on semiconductor exports to foreign adversaries, and cultivating diplomats’ knowledge of and focus on technology policy.
- Reps. Brett Guthrie (R-KY) and John Joyce (R-PA), leaders of the House Committee on Energy and Commerce, announced the establishment of a comprehensive data privacy working group.
In the executive branch and agencies:
- The Trump Administration released a request for information on the development of the administration’s AI Action Plan, which seeks to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.” The deadline to submit comments is March 15, 2025.
- Elizabeth Kelly, the US AI Safety Institute’s first director, stepped down at the beginning of February after overseeing the institute’s work for a year. Later in the month, funding cuts by the Trump Administration threatened to impact up to 500 staffers at the US AI Safety Institute (AISI) and other National Institute of Standards and Technology (NIST) offices.
- The Department of Health and Human Services (HHS) removed its AI strategic plan from its website less than two months after the document was originally published.
- The Trump Administration published a memorandum on digital services taxes, stating that the administration will impose tariffs and respond in kind if a foreign government “imposes a fine, penalty, tax, or other burden that is discriminatory, disproportionate, or designed to transfer significant funds or intellectual property from American companies to the foreign government or the foreign government’s favored domestic entities.”
In industry:
- Google updated its artificial intelligence ethical guidelines, removing a section pledging not to pursue applications for “weapons, surveillance, technologies that ‘cause or are likely to cause overall harm,’ and use cases contravening principles of international law and human rights.” In contrast, the updates also added language pledging to continue utilizing human oversight and accepting feedback to ensure that their products follow “widely accepted principles of international law and human rights.”
- Scale AI and AISI announced a partnership to develop testing methodologies for frontier AI models.
- YouTube announced its “big bets” for 2025, among them building guardrails to protect users and producers on the platform across its AI tools, including new technology to “help individuals detect and control how AI is used to depict them on YouTube” and machine learning to estimate a user’s age and apply child safety protections.
- Apple released a memo on its new kids’ online safety measures, which enhance parental controls over children’s online accounts and screen time, implement new tools for developers to incorporate child safety features into their products, and add age-related features to its App Store.
In civil society:
- Americans for Responsible Innovation sent a letter to the Committee on Health, Education, Labor, and Pensions urging Members to question Labor Secretary Nominee Rep. Lori Chavez-DeRemer (R-OR) on her plans to address the impact of AI on American workers.
- Fairplay published a report on the impact of social gaming monetization on young people and recommendations for policies to protect kids online.
In the courts:
- NetChoice filed a lawsuit seeking to overturn the State of Maryland’s Kids Code (HB 603/SB 571), arguing that the Code’s Data Protection Impact Assessment requirements violate the First Amendment, among other constitutional violations.
- Chegg, Inc. filed a lawsuit against Google, alleging the company holds a monopoly in general search, pointing to Google’s use of artificial intelligence (AI) “to summarize publisher content… in ‘AI Overviews,’ and training of its large-language AI models on content from websites that rely on participating in its search index.”
Legislation Updates
The following bills made progress across the House and Senate in February:
- TAKE IT DOWN Act - S.146. Introduced by Sen. Ted Cruz (R-TX) in January, the bill passed the Senate without amendment by unanimous consent in February.
- Shielding Children's Retinas from Egregious Exposure on the Net (SCREEN) Act - S.3314 / H.R. 6429. Sens. Mike Lee (R-UT), John Curtis (R-UT), and Jim Banks (R-IN), and Rep. Mary Miller (R-IL) introduced companion bills requiring all commercial pornographic websites to adopt age verification technology to ensure a child cannot access their pornographic content.
- No DeepSeek on Government Devices Act - S.# / H.R.#. Introduced by Sens. Jacky Rosen (D-NV), Pete Ricketts (R-NE), and Jon Husted (R-OH) in the Senate and Reps. Josh Gottheimer (D-NJ) and Darin LaHood (R-IL) in the House, the bill would prohibit federal employees from using DeepSeek on government-issued devices.
- Decoupling America’s Artificial Intelligence Capabilities from China Act - S. 321. Introduced by Sen. Hawley (R-MO), the bill would “prohibit American companies from sharing advanced artificial intelligence models with China and prevent entities linked to the Chinese Communist Party from accessing U.S. AI technologies.”
- Preventing Algorithmic Collusion Act of 2025 - S.232. Introduced by Sen. Amy Klobuchar (D-MN), the bill would “prohibit the use of pricing algorithms that facilitate collusion through nonpublic competitor data, updating antitrust laws to address algorithmic collusion that does not involve explicit agreements.”
- A bill to require the Secretary of Health and Human Services... - S.501. Introduced by Sen. Ted Budd (R-NC), the bill would “require the Secretary of Health and Human Services to develop a strategy for public health preparedness and response to artificial intelligence threats, and for other purposes.”
- A bill to improve the communications between social media platforms and law enforcement agencies… - S.626. Introduced by Sen. Rick Scott (R-FL), the bill seeks “to improve the communications between social media platforms and law enforcement agencies, to establish the Federal Trade Commission Platform Safety Advisory Committee, and for other purposes.”
- A bill to prohibit disinformation… - S.589. Introduced by Sen. Elizabeth Warren (D-MA), the bill would “prohibit disinformation in the advertising of abortion services, and for other purposes.”
- To amend title 18, United States Code… - H.R.1283. Introduced by Rep. Gus Bilirakis (R-FL), the bill would “prohibit child pornography produced using artificial intelligence.”
- To amend the Communications Act of 1934… - H.R.1027. Introduced by Rep. Eric Sorensen (D-IL), the bill would “require disclosures with respect to robocalls using artificial intelligence and to provide for enhanced penalties for certain violations involving artificial intelligence voice or text message impersonation, and for other purposes.”
- To prohibit the obligation or expenditure of Federal funds... - H.R.1233. Introduced by Rep. Thomas Massie (R-KY), the bill would outlaw the “obligation or expenditure of Federal funds for disinformation research grants, and for other purposes.”
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.