Expert Predictions on What’s at Stake in AI Policy in 2026
J.B. Branch, Ilana Beller / Jan 6, 2026
J.B. Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch division, and Ilana Beller leads Public Citizen’s state legislative work relating to artificial intelligence.

US President Donald Trump displays a signed executive order as (L-R) Sen. Ted Cruz (R-TX), Commerce Secretary Howard Lutnick and White House AI and crypto czar David Sacks look on in the Oval Office of the White House on December 11, 2025 in Washington, DC. (Photo by Alex Wong/Getty Images)
For years, debates over the regulation of artificial intelligence required a degree of speculation about its potential harms. But even as the technology continues to evolve, it is clear that by the end of 2025, AI had ceased to be an “emerging” policy issue. Real-world harms are accumulating rapidly, putting pressure on lawmakers to answer the concerns of their constituents. The stage is set for important political and legal battles that will play out in 2026 and will define who controls AI, who bears the costs of its harms, and whether democratic governments and regulators can keep pace.
Indeed, some of 2025’s most revealing moments seemed like scripts from the dystopian science fiction series Black Mirror. Leaked Meta documents revealed that executives signed off on allowing AI to have “sensual” conversations with children. In Baltimore, an AI-powered security system mistook a student’s bag of Doritos for a gun, prompting school administrators to summon the police. An AI-enabled teddy bear was yanked from store shelves after reports that it discussed sexual topics and encouraged children to harm their parents. Psychiatrists across the United States increasingly warned about the growing problem of AI “psychosis,” even as OpenAI was sued for allegedly coaching a teen to commit suicide.
Last year, AI-generated synthetic media became even more prevalent in the political arena, as the tools to produce it became easier to use. President Donald Trump openly shared AI-generated images and videos to ridicule opponents. In Virginia, a congressional candidate received serious pushback for debating an AI-generated avatar of his opponent. Senator Amy Klobuchar (D-MN) confronted the reality of AI impersonation and voice fraud firsthand when a deepfake of her spewing vulgarities about actress Sydney Sweeney appeared, while former New York governor and unsuccessful New York City mayoral candidate Andrew Cuomo deployed the technology against his opponent, Zohran Mamdani.
While Congress failed to take action on AI in 2025—apart from the passage of the TAKE IT DOWN Act, which addresses nonconsensual intimate images—state lawmakers were busy passing bipartisan laws aimed at election deepfakes, algorithmic discrimination, consumer scams, and the use of AI in sensitive domains like health care and education. To counter regulation, Big Tech poured hundreds of millions of dollars into newly formed super PACs that will target lawmakers who advance AI laws, ensuring policy will likely be shaped as much by campaign finance as by technical expertise. Republicans, who received nearly 75 percent of recent tech-backed political donations, tried not once, not twice, but three times to pass an AI moratorium, casting state consumer protections as threats to innovation and national security. When those efforts failed, Trump issued an executive order directing the Department of Justice to sue states over AI laws deemed “burdensome.”
All of these phenomena played out against a backdrop of nonstop AI hype and raging markets, causing some—including OpenAI founder and CEO Sam Altman himself—to warn about an AI bubble. Copilots, companions, surveillance tools, and novelty chatbot-powered gadgets saturate the market while venture capital continues to flood into AI startups. This scenario leaves many questions looming in 2026. Is this the year the AI bubble bursts, or will AI company valuations continue to balloon? What will courts, specifically the US Supreme Court, make of a possible showdown between the DOJ and states protecting consumers? Will Congress pursue AI regulations that 97 percent of the American public support?
To get at these questions and more, we invited predictions from multiple policy experts. Whatever happens in tech policy and in politics more broadly in 2026, one thing is certain: AI will be at the center of it all.
David Atkinson—Postdoctoral Fellow, Georgetown University Law Center
While I believe AI—specifically generative AI—has tremendous potential to be extraordinarily pro-social, I expect AI developments in 2026 to be similar to 2025, with federal policy getting worse before it gets better. I predict that generative AI will be no more democratized in the next 12 months than it was in the preceding 12 months, that companies will continue to hill-climb on benchmarks rather than make fundamental leaps in generalized reasoning or capabilities, that a handful of Big Tech companies plus Anthropic, OpenAI, and xAI will continue to dominate AI-centered economic and political power, and that despite the hand-wringing about an “AI race” with China, the US will comfortably retain a leading position similar to where it stands now.
I also expect at least one more multi-billion dollar payout for copyright infringement, companies leaning further into companionship bots rather than away from them in a desperate attempt to generate more revenue, and no comprehensive law or policy to address either the economic harms of scraping websites and diverting traffic away from them or the negative environmental impact of data center construction and the ever-increasing number of tokens generated by chatbots and “agents.”
Adam Billen—Vice President of Public Policy, EncodeAI
The battle over states’ rights on AI will remain the highest profile fight in 2026. As the Trump administration attacks state AI laws under the Executive Order, small red and blue states will face a chilling effect while California and New York push ahead. Industry will rally around White House AI policy czar David Sacks’ proposal on preemption, which will likely contain a twisted version of California’s SB 53 with “carveouts” that won’t actually protect vulnerable groups like children.
There will be lots of hand-wringing over proposed legislation, but massive settlements and judgments in child safety cases, along with the application of existing law by state attorneys general, will have a far larger impact on industry. It is in this context that industry will wield an influx of cash to push preemption and other methods to immunize themselves from liability in the courts. The overwhelming popularity of AI safety protections, combined with the counter-super PAC Public First, will make their job more difficult than they expect.
In terms of the technology itself, everyday Americans will most notice more capable AI agents, but the frontier of capabilities will continue to advance in knowledge domains like cybersecurity and autonomous hacking that most people will not track.
Doug Calidas—Senior Vice President of Government Affairs, Americans for Responsible Innovation
As we head into 2026, expect to see an acceleration in AI technology and an acceleration in its impacts.
On the tech side, we expect models to continue scaling rapidly, with sharper gains in reasoning, autonomy, and multimodal performance. But the more the technology grows, the longer its shadow becomes. We’ve already seen the first stories of AI encouraging teen self-harm and the initial indications of AI’s impact on jobs for young workers. Without real safeguards, these impacts are going to become firsthand experiences for more and more voters.
On policy, expect continued debates over export controls, federal preemption, kids’ safety online, and how AI is reshaping the workforce.
Amina Fazlullah—Head of Tech Policy Advocacy, Common Sense Media
In 2026, AI policy will be defined by two competing forces: the fallout of the recent Trump AI Executive Order seeking to discourage state AI laws versus the continued bipartisan push to implement meaningful protections for young AI users and other vulnerable populations. Importantly, the EO does not, on its own, override state laws, and many states are expected to keep moving ahead, with only a handful slowing down to weigh the risk of legal challenges and potential BEAD funding impacts.
In 2025, state bills focused on existential risk and establishing basic guardrails for risky AI products for all users. While critical first steps, the growing number of lawsuits has made clear that lawmakers need to go further to actually prevent harms. Children have died; a Senate Judiciary hearing featured the parents of kids lost or irreparably harmed by AI chatbots.
AI companies continue to prioritize engagement over safety, even attempting to lock in insufficient guardrails in California through a new ballot initiative. However, new lawsuits from victims will accelerate the debate toward proactive measures. Protecting kids in the AI era requires limiting access to unsafe chatbots, strengthening data privacy, ensuring accountability for harms, and incorporating independent safety audits. The rapid growth of AI has come with steep costs; meeting the moment requires lawmakers to match that pace with strong protections to keep us safe.
Asad Ramzanali—Director of Artificial Intelligence and Tech Policy, Vanderbilt Policy Accelerator
As we enter a mid-term election year, here's what I anticipate: For AI products consumers use (e.g., chatbots), technology will advance in the way iPhone updates do – marginal and unremarkable. Meanwhile, the rhetoric around AI's necessity to national security and geopolitics will crank up, even as LLMs and generative forms of AI reach the edge of utility. On balance, the AI bubble will continue inflating but won't burst.
In DC, Congress will dust off some part of the stack of stalled tech bills – competition, kids’ safety, privacy, Section 230 – and fold in chatbot-specific proposals. A few will clear committee, maybe even a chamber. A bill that feels responsive to a crisis gets enacted into law. Beyond DC, data centers will appropriately get blamed for rising electricity rates and water issues, turning local frustration into a sharper electoral issue and extending dynamics already visible in the 2025 special elections.
Marc Rotenberg—Founder, Center for AI and Digital Policy (CAIDP)
In 2026, the story of AI will be less about dazzling new capabilities and more about whether democratic institutions can meaningfully steer the technology. We should expect further concentration of power around a small number of firms that control foundation models, intensifying concerns about opacity, bargaining power, and systemic risk. At the same time, AI will move deeper into the infrastructure of everyday life—credit and housing decisions, workplace management, education, border control, and elections—often in ways that are invisible to those affected. The key risk is not only “runaway” AI, but the quiet normalization of systems that undermine human dignity, due process, and equal protection because no one is clearly accountable when things go wrong.
On the policy side, 2026 will be the year of enforcement and “red lines,” not just new declarations. The central question will be whether governments are willing to prohibit certain applications—such as biometric mass surveillance and autonomous weapons—or whether they will settle for voluntary codes of conduct. We will also see whether emerging frameworks such as the EU AI Act and the new Council of Europe AI convention inspire parallel efforts elsewhere or give way to regulatory arbitrage. For me, one of the most important indicators is whether countries establish genuinely independent oversight bodies with the authority, expertise, and resources to scrutinize powerful AI systems. If we make real progress on institutional safeguards and clear prohibitions, 2026 could be remembered as the year we began to align AI development with democratic values rather than merely admire its technical prowess.
Bruce Schneier—Fellow and Lecturer, Harvard Kennedy School
One important trend is the declining cost of AI. This comes from a combination of decreased model size and increased training and operating efficiency. Models like the Chinese DeepSeek and the Swiss Apertus have been trained with orders of magnitude less compute than the monster US tech giant models. And they can run on people's phones. As the world moves away from these "do everything" models to actually useful specialized models, the center of AI power will shift from those tech giants to the broader ecosystem. And that's a good thing for society.
Ridhi Shetty—Senior Policy Counsel, Privacy and Data Project, Center for Democracy and Technology (CDT)
AI has existed and evolved for decades, but what has also recently evolved is how average people interact with it and react to its impact on their lives. People are being encouraged, even forced, to use AI for everything from their workflows and research to their interpersonal relationships, feeding into the persistent narrative that AI’s ubiquity is inevitable and its advancement is therefore an urgent priority. Meanwhile, fashioning AI into an everyday tool for the masses has detracted from efforts to address AI’s longstanding role in decisions about who can access basic needs like income, housing, and healthcare, and whose fundamental freedoms are subverted.
We’re likely to see these issues escalate in 2026 with continued promotion of two ideas: AI’s purported “democratization” and the false choice between regulating and innovating. However, the public has already indicated skepticism about AI’s ROI and general value, as well as concerns over its impact on economic stability, civil liberties, and misinformation. Congress has done little to assuage these doubts. So we can expect continued distrust in AI’s performance and in institutions that sell or deploy AI, absent steps by companies and the government to promote responsible development of AI that protects people’s rights and interests.
Tyson Slocum—Director of Energy Program, Public Citizen
AI is in the midst of the largest speculative financial bubble in history, fed by circular funding tie-ups between chipmakers, cloud computing providers, and AI developers, forcing the likes of Nvidia to defend against allegations of Enron-style accounting. AI companies are both overvalued and highly concentrated, with the seven largest tech companies accounting for one-third of the S&P 500 index, leading OpenAI to muse that it deserves a federal bailout when the house of cards collapses. Millions of GPUs housed in AI data centers gobble unfathomable quantities of electricity, with projections that the US will add the equivalent of Japan’s existing power generation capacity over the next five years to accommodate data centers and other load growth.
This is absurd. Big Tech’s frenzied push to force American communities to accept data centers—backed by Trump’s threat to use emergency powers to override state and local objections to prioritize fossil fuel infrastructure—is threatening households with even heftier utility bills. The feds must cease efforts to fast-track data centers and associated gas infrastructure, and instead allow states and localities to lead. Because when the AI bubble pops sooner rather than later, America's ratepayers cannot afford to be stuck with these massive stranded assets.
Helen Toner—Interim Executive Director, Center for Security and Emerging Technology (CSET)
One thing I'm expecting to see in 2026 is a continuation of a trend from 2025: The push to find better ways of measuring AI progress. It's becoming ever clearer that the traditional question-and-answer "benchmark" datasets are no longer fit for purpose, and 2025 yielded new measurement approaches including METR's famous "time horizon" metric and OpenAI's "GDPval," among others.
I'm also watching for any progress on the idea of "continual learning," which has become a hot topic among AI developers recently. Continual learning is about how we could build AI systems that can accumulate new knowledge and skills over time, the way humans do. Now that it's in the crosshairs of top researchers, I'm curious to see if 2026 brings any meaningful progress on this ability.
Lastly, we'll learn a lot in 2026 about how thorny the problem of chatbots and mental health is. 2025 saw a flurry of stories about suicides and psychosis, followed by a scramble by companies to implement mitigations. Is there an inherent tension between building an engaging chatbot and preventing these kinds of outcomes, or will the mitigations be enough?
Cody Venzke—Senior Policy Counsel on Surveillance, Privacy, and Technology, American Civil Liberties Union (ACLU)
As the saying goes: “everything old is new again,” and that will be true for tech policy in 2026 in three critical ways.
First, the tug of war between states and the federal government will continue. While states made significant strides in 2025 to address AI, federal policymakers instead focused on preempting those efforts. Despite failing twice, proponents of preemption will undoubtedly carry that push into 2026. Instead of attacking state regulation, federal policymakers should be learning from states’ best proposals.
Second, state AI legislation will continue, but perhaps at a lighter pace. Many states have shorter sessions in 2026, and budgeting will take up significant attention. Consequently, AI legislation may be budgetary or narrowly focused. Such approaches will be crucial in ensuring that AI is trustworthy.
Third, Congress will revisit and perhaps pass tech legislation. While a meaningful comprehensive privacy law is unlikely, many proposals have previously passed one chamber, are relatively uncontroversial, and should be table stakes: updating the COPPA and FERPA privacy laws, requiring a warrant for our emails, and closing the data broker loophole through the Fourth Amendment Is Not for Sale Act. Congress should finally pass these long-overdue fixes.