
Reactions to the Bipartisan US House AI Task Force Report

Prithvi Iyer, Justin Hendrix / Dec 20, 2024

On December 17, the bipartisan House Task Force on Artificial Intelligence released its comprehensive report and key findings.

On Tuesday, a bipartisan US House of Representatives Task Force on Artificial Intelligence led by co-chairs Jay Obernolte (R-CA) and Ted Lieu (D-CA) released a 253-page report laying out recommendations for Congress on AI. The document articulates “guiding principles, 66 key findings, and 89 recommendations, organized into 15 chapters.”

Tech Policy Press collected fifteen reactions to this report from various experts and organizations, including some solicited by email and others posted elsewhere.

Alex Ault, Policy Counsel, Lawyers’ Committee for Civil Rights Under Law:

While this report is a positive step, we’re focused on ensuring Congress doesn’t neglect the real problems uncovered by the AI task force. We can’t allow Congress to sign blank checks funding AI development while Black people and other communities of color pay the cost of AI adoption. The House Task Force report recognizes that trust is needed for these technologies to truly benefit everyone. That can’t happen if individuals continue to have their data scraped to build tools that work against them or are kept in the dark as to how and when AI tools are determining facets of their lives.

Last year, the Lawyers’ Committee released the ‘Online Civil Rights Act,’ and we’ve been proud to endorse the AI Civil Rights Act – concrete policies squarely aimed at ensuring AI has the safeguards needed to protect our rights and make AI work for everyone. It is imperative that our lawmakers do more than just discuss the measures needed to ensure that AI serves the public good and doesn’t exacerbate longstanding inequities.

Kate Brennan, Associate Director of the AI Now Institute:

The Bipartisan House Task Force on AI released a 253-page report gesturing towards the many material harms of rapid AI adoption while providing few meaningful policy recommendations to address them. While it is encouraging to see the report articulate the harms perpetuated by algorithmic decision-making—such as bias, discrimination, and worker dislocation—the report is heavily tipped in favor of AI adoption and acceleration, even in sectors where the benefits of large-scale AI systems are less than proven.

In real time, we are watching large-scale AI become central to claims of US economic and national security interests. As such, it is no surprise to see national security and energy accelerationism take center stage in the report—including findings that ‘AI is a critical component of national security’ and ‘maintaining a sufficiently robust power grid is a necessity.’

While government reports articulating technological harms are important, we cannot shift attention away from critical sectors—including the defense industry, energy sector, and workplaces—where wholesale AI adoption and acceleration faces few legislative hurdles. Just this week, the National Defense Authorization Act passed swiftly through the House and Senate, filled with troubling provisions to accelerate AI adoption and private industry entrenchment across the Department of Defense. Earlier this week it was reported that President Biden is considering executive action to fast-track the construction of data centers for AI, which would allow data centers to exceed pollution limits and dominate access to power supply. A policy agenda meaningfully addressing these troubling trends is critical.

Daniel Castro, Vice President, Information Technology and Innovation Foundation (ITIF) and Director of ITIF's Center for Data Innovation:

The new Bipartisan House Task Force Report on AI is a significant milestone, offering Congress a clear and actionable vision for AI governance in the United States. It provides a practical blueprint for promoting innovation, creating safeguards, and ensuring the U.S. remains a global leader in AI. The report's strength lies in its clarity—it precisely outlines what to regulate, who should regulate it, and how to do so effectively. This approach avoids unnecessary overreach, targeting truly novel risks and opportunities unique to AI that existing laws don’t adequately address.

By emphasizing sectoral regulators, the report ensures oversight is both practical and precise, equipping agencies with the tools and expertise needed to address AI-specific challenges in their domains. In addition, the discussion about federal preemption highlights the benefits of replacing the current patchwork of state laws with a unified national framework, reducing complexity and enabling innovation at scale without compromising safeguards. Another strength is the report's clear-eyed assessment of various issues. It does not call for sweeping mandates or untested ideas but instead outlines a layered, pragmatic strategy that lawmakers can support to position the United States to lead confidently in AI development and governance on the global stage.

Neil Chilson, Head of AI Policy, Abundance Institute:

The House AI Task Force report is a comprehensive and thoughtful document that zeroes in on the critical goal of ensuring AI delivers economic and national security benefits. There’s much to appreciate: it supports open source, prioritizes addressing demonstrable harms over speculative risks, and highlights challenges like data-sharing restrictions in AI health applications. Among its many strengths, three key takeaways stand out. First, the report is optimistic about the transformative potential of AI while rejecting alarmist “AI doom” scenarios. It emphasizes the need for the U.S. to maintain its innovation leadership, recognizing AI’s ability to revolutionize agriculture, healthcare, finance, government, and national security. The Task Force also acknowledges that while AI may bring new challenges, it can mitigate existing harms, such as reducing deadly medical diagnostic errors.

Second, it advocates for a sectoral, incremental approach to AI regulation, steering away from sweeping, one-size-fits-all federal actions. By addressing gaps within specific sectors rather than through broad strokes, the report appropriately considers AI’s general purpose and fast-evolving nature.

Finally, the report underscores the importance of evidence-based, industry-led testing and metrics. Current evaluation methods often lack rigor, relying more on “vibes” than verifiable results. The Task Force rightly calls for a bottom-up, rules-based, multistakeholder process that prioritizes technical merit—an approach aligned with the U.S.’s proven standards-setting strengths.

Overall, the Report provides an excellent foundation for discussions about Congress's proper role in ensuring AI innovation delivers the abundance it promises. Much remains to be built on that foundation, however — especially how to prevent or cut through the growing thicket of state laws that are overwhelmingly contrary to the Task Force's measured, humble, and tech-savvy approach.

Willmary Escoto, Policy Counsel, Access Now:

We’re still making our way through the House AI Task Force report, but we’re encouraged by its focus on addressing civil rights and privacy risks posed by artificial intelligence. The recommendations—like maintaining human oversight, tackling discriminatory AI, empowering regulators, boosting transparency, and developing standards—are critical to promoting accountability and minimizing harm. It’s good to see bipartisan acknowledgment of the need to govern AI responsibly and align its use with fundamental rights.

The emphasis on addressing discriminatory AI decision-making is particularly important. Transparency and the push for standards development will also play key roles in building public trust and reducing harm. That said, as we dig in further, we already see opportunities to strengthen these proposals—for instance, requiring algorithmic impact assessments (AIAs) before deployment to identify and mitigate risks early on.

Overall, this report shows promise. However, more work is needed to explore the specifics of issues like generative AI, energy use, and accountability mechanisms that truly protect people. As this effort progresses, it’ll be essential to center community input and ensure real enforcement power to make these ideas stick. Access Now is ready to continue engaging in this process to help shape an AI framework that prioritizes human rights.

Amina Fazlullah, Head of Tech Policy Advocacy, Common Sense Media:

The Bipartisan House Task Force on AI makes a number of recommendations in areas that Common Sense Media strongly supports. For example, we support its recommendations for greater AI literacy for students and the general public, its attention to the harms AI poses to civil rights and liberties, and its focus on specific AI-enabled harms such as AI-generated child sexual abuse material. However, we believe the report’s ex-post, sector-based approach to regulation is insufficient to address AI safety concerns. By failing to call for the implementation of basic guardrails, transparency requirements, and safety-by-design early in the development process, Congress would enable gaps that allow serious, preventable harms to persist. Common Sense Media recommends that in 2025, Congress adopt a proactive, adaptive approach to addressing and mitigating the potential harms of AI misuse.

The recommendation on AI literacy is important because AI literacy builds trust by helping consumers understand how AI works, make informed decisions about its use, and identify risks, like bias, privacy concerns, and manipulative practices. We support the Task Force’s recommendation to invest in curricula that teach children digital skills and provide professional development for educators. However, and importantly, the report misses an opportunity to highlight how Congress can utilize existing programs, such as those established by the Digital Equity Act, to expand and improve AI literacy for users of all ages.

With regard to AI safety issues, the report's ex-post approach to AI safety limits Congress' ability to proactively shape the safe and equitable use of AI, leaving critical gaps in user protection. Excluding safety-by-design is a missed opportunity to marshal the entrepreneurial spirit of the AI sector and encourage collaboration with the government to develop products that prioritize safety from the outset. The report also favors a patchwork approach to mitigating the harms of AI misuse, exacerbating legal inconsistencies and enforcement challenges, which undermines an adaptive approach to AI regulation. While we commend the focus on bias mitigation and civil rights protections, proactive measures that ensure transparency, safeguard privacy, and promote consumer trust are essential to fostering responsible innovation.

In the absence of comprehensive AI regulation, Congress must, at a minimum, establish a strong regulatory floor that ensures consistent protections nationwide while supporting state efforts to address AI-enabled harms. While it's true that no single piece of legislation can encompass all the unseen consequences of AI, waiting to act risks leaving users unprotected. That was the approach taken when social media emerged more than 20 years ago, and we see the harms from it today. AI's rapid evolution will demand continual legislative and regulatory evaluation and adaptation. By taking steps to address safety today, Congress can lay the groundwork for a more comprehensive and proactive framework that ensures a strong and healthy AI ecosystem for all. We look forward to continuing to work with Congress to create a digital world that is safe, accessible, and welcoming for kids, teens, and all users in the years ahead.

Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology:

The report includes important discussion of the risks of bias presented by AI systems, the privacy risks associated with training AI and its outputs, and the key role the government plays in defining responsible use of and setting standards for AI. These questions must be addressed if users and businesses are going to trust AI and embrace its widespread adoption.

The report continues Congress's longstanding, bipartisan focus on ensuring responsible guardrails for the government's own use of AI. Congress has prioritized this since it first passed legislation in 2020 requiring OMB to issue binding guidance that ultimately became Memo M-24-10. As the Trump administration considers its approach to AI, it should remember that Congress has demanded principled rules and transparency in this space.

Read the full statement here.

Jason Green-Lowe, Executive Director, Center for AI Policy (CAIP):

The Center for AI Policy commends the release of the Bipartisan Artificial Intelligence Task Force's landmark report. This wide-ranging report marks a critical step forward in Congressional oversight of artificial intelligence (AI). It consistently points out the need to mitigate the risks posed by AI and suggests several helpful pathways for doing so, including further investment in AI standards, evaluations, and research.

CAIP agrees with the report’s authors that ‘a thoughtful, risk-based approach to AI governance can promote innovation rather than stifle it.’ CAIP maintains that robust safety measures are not obstacles to innovation and American technological leadership but essential catalysts for sustainable progress. Likewise, we are pleased to see the report acknowledge that some risks are ‘truly new for AI due to capabilities that did not previously exist’ and that in some cases, ‘when an AI issue has emerged recently…we need to more thoroughly consider how well existing regulatory regimes address that issue.’ Drawing from extensive stakeholder engagement and thorough analysis, the report represents a significant achievement in developing a consensus-driven approach to one of the world's most transformative technological developments.

Damon T. Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law:

We are pleased that this report underscores what we’ve been saying for years—for AI to benefit all of us collectively, Congress must act to ensure that this emerging technology doesn’t violate our civil rights. Now that both chambers of Congress have analyzed this issue, it is time for action. Nuanced and comprehensive legislation, like the AI Civil Rights Act, must immediately follow to address the harms AI is known to cause, especially for Black people and communities of color. Without intentional safeguards and tangible legislative steps, these technologies will be used as tools of systemic oppression. There is not a moment to lose; Congress must act swiftly and decisively to protect the rights of the people of our country in the digital age.

Laura MacCleery, Senior Policy Director, UnidosUS:

This week’s bipartisan Congressional Report on AI highlights a critical challenge: while AI systems are rapidly becoming a backbone for decision-making across sectors, we cannot yet be sure they are fair. While it is frequently assumed that AI fairness is about preventing historical forms of discrimination, the truth is more complex: AI systems can develop unexpected, and even arbitrary, preferences. As we expand AI deployment across housing, employment, education, and criminal justice, assuring basic fairness is an essential safeguard. Without it, we will undoubtedly have consequential but unfair outcomes that go both undetected and uncorrected.

The Report also rightly flags the profound relationship between uses of AI and civil liberties. In the current moment, we must recognize that our widespread lack of data privacy and foundational data minimization poses the real “x-risk” for this technology, given the threat of manipulation of public opinion, including micro-targeting of highly tailored disinformation, the growing use of facial and biometric recognition tools, and the consequences these all may have when powered at scale for a fair, free, and democratic society.

Dr. Alondra Nelson, distinguished senior fellow at the Center for American Progress and Harold F. Linder Professor at the Institute for Advanced Study:

A year ago, there was some promise of bipartisan consensus around some of the most perilous risks posed by unchecked artificial intelligence (AI) in Americans’ lives. We had the potential to learn from the mistakes made at the dawn of social media when we let the industry and products evolve with few guardrails.

At the executive level, the Biden-Harris administration has laid out principles to guide the design, use, and deployment of automated systems to protect the American public and empowered agencies to provide new AI oversight, use existing tools, and leverage the federal government’s vast purchasing power to influence the safety of these tools. While much more should have been done to turn consensus into legislative action this past year, I am encouraged by the Task Force's efforts to seek common ground on fundamental issues like privacy, worker protections, and safeguards against bias and discrimination in automated decision-making.

This report makes clear that government agencies should be exemplars of AI’s responsible use and that government leadership includes moving quickly to utilize existing regulatory pathways, echoing President Biden’s call, in his Executive Order on AI, for government to pull every lever to ensure the United States is a role model for innovation — one that combines ingenuity with the preservation of democratic values.

Crucially, this report addresses civil rights and civil liberties issues and upholds the importance of intimate privacy, calling out the risks synthetic media pose to that privacy. It spotlights the use of deepfakes and voice cloning to conduct fraud and undermine trust in information and institutions, and recommends the uptake of content authenticity practices to combat these dangers. It emphasizes that open AI foundation models can be a source of innovation and that an evidence-based approach should be taken in evaluating the risks of their deployment. It rightly notes that the responsibility for improving this ecosystem lies across all actors in the AI lifecycle, from developers to deployers.

Congress has mostly failed to act on AI governance, but this report’s recommendations offer a roadmap to meet the moment, demonstrating that advancing the responsible use of AI is a shared American goal. Protecting the American people from harm as new technologies develop has never been easy — and in a politically charged and polarized environment, it will not be. But it’s important to commend the good faith efforts being made to pursue these safeguards at a time when tech CEOs are starting to call the shots in Washington. As the foremost creator of this technology, America bears responsibility for its safe use. This bipartisan report makes clear that the next Congress is the time to act on AI legislation.

Courtney C. Radsch, Director, Center for Journalism and Liberty (and a board member at Tech Policy Press):

A Bipartisan House Task Force has made a valiant effort to put forth what they say are guiding principles, forward-looking recommendations, and policy proposals, but the report falls flat on some of the most consequential issues related to competition, privacy, data governance, and intellectual property. Given the existing market dominance in key areas of the AI ecosystem, including chips, compute, data, and talent, and the threat that concentration of power among a handful of Big AI firms poses to national security, resiliency, and innovation, it was surprising to see antitrust and competition policy relegated to the appendix as areas for future exploration. Of particular interest to our work at Open Markets Institute and the Center for Journalism and Liberty was how the task force addressed the role of data in AI systems and its attendant recommendations. There are many issues to delve into in this 253-page report, but we focus here on two parts: agriculture and intellectual property.

I was excited to see a section on agriculture, given what we know about the dominance of a handful of Big Ag firms in our food system and the data it generates, coupled with the importance of data in AI systems. Wow, what a disappointment. I asked my colleague Claire Kelloway, who leads Open Markets’ food and farming work, for her thoughts. ‘The agriculture section of this report overlooks competition dynamics in the precision agriculture space. As it stands, dominant agribusiness corporations command access to the critical data necessary to train and run AI-powered precision agriculture programs. These corporations have a profit incentive to recommend that farmers use their products and ecologically destructive monocropping paradigms in their AI-driven farm management prescriptions. The USDA and policymakers should be promoting more open access to agricultural data to foster a more competitive digital agriculture market and, by extension, more diverse and sustainable farming methods.’ The House missed a huge opportunity to address an issue that affects the price we pay for eggs (a recurring theme in the latest presidential race) and our favorite family traditions.

Similarly, the IP section was a giant letdown. The report notes the investment and effort required to ‘identify, assemble, clean, curate and otherwise develop specialized training data,’ yet a potential mitigating factor for transparency requirements on AI companies is barely acknowledged. The section on rights holders does not similarly acknowledge the investment and effort required to, for example, gather the news, learn how to act, or write a book. Although the discussion of deepfakes mentions that artists and performers derive income from their voice and likeness, this is but one aspect of a far larger problem that is insufficiently addressed, and neither journalism nor the news industry is mentioned in the report.

Furthermore, the task force reiterates AI companies’ assertions that determining IP protections with respect to inputs ‘could be extremely challenging to do at scale.’ Yet just because something is challenging does not mean that it should not be required, nor should it absolve companies of their responsibility. Certainly, determining calorie counts and nutritional information for food products was not easy, but we nonetheless require food producers to do so. The conclusion of the IP section is exactly wrong – there is an urgent need for Congress to clarify the IP laws with respect to inputs, as a group of senators acknowledged earlier this year at a hearing on AI and journalism that showed bipartisan consensus that using publisher content to train AI systems without permission or compensation is not fair use. Congress is once again abdicating its role to legislate (think privacy) and instead suggesting that we wait years for lawsuits to work their way through the legal system while facts on the ground encourage an entirely new industry based on theft to emerge and establish itself.

Adam Thierer, R Street Institute:

Generally speaking, the report steers a middle course and sets the stage for a fresh approach to AI policy when the new Republican Congress takes control next year. There’s much to like about the vision and principles sketched out in the document. It smartly avoids one-size-fits-all silver-bullet solutions and stresses the need for more flexible and incremental approaches to AI governance.

But the report also leaves many key questions unanswered, and the biggest of them is what to do about the rapidly expanding patchwork of state and local AI regulatory efforts. Sadly, the House AI Task Force report completely punts on that vitally important issue. Unfortunately, abdication of responsibility on preemption could essentially make all the other positive recommendations in the report irrelevant because it leaves the door wide open for 2025 to become the year that the mother of all regulatory patchworks gets imposed on algorithmic innovators in America. Many states are proposing dangerous regulatory schemes modeled on a misguided law that passed in Colorado in May, almost passed in Connecticut, and is currently being floated in Texas. These and other states are basically moving to bring the European Union’s disastrous digital technology regulatory regime to America.

If Congress wants to ensure the vision articulated in the new House AI Task Force report has real meaning and effect, it will need to deal with this problem sooner rather than later to ensure that America remains “the world’s undisputed leader” in AI, which the report makes a top priority as we face stiff competition from China and other nations on this important front.

Read Thierer’s full analysis here.

Cody Venzke, Senior Policy Counsel, American Civil Liberties Union:

The House Task Force on Artificial Intelligence’s new report marks an important step in recognizing the serious risks of discrimination and bias posed by AI systems, particularly in law enforcement’s use of facial recognition technology. The report recognizes that discrimination and bias may arise in AI systems due to their design, training data, or failures by end users. The Task Force’s bipartisan report is an important recognition that AI systems are harming people right now through discriminatory outcomes and supercharged surveillance.

While the ACLU commends the Task Force for acknowledging these harms, more concrete action is needed to protect civil rights, ensure transparency, and prevent AI systems from perpetuating systemic inequities. Curbing those abuses of AI is not a partisan issue, and Congress should continue working to introduce and pass legislation to address discrimination and bias, such as Sens. Markey and Hirono’s AI Civil Rights Act.

The report also investigates the complex considerations around generative AI, open AI systems, and free speech, concluding that approaches must be tailored to address concrete, “demonstrable” harms. The Task Force is appropriately cautious when wrestling with deepfakes, content authentication, digital identity, open-source AI, and other issues. The report recognizes that legislators should consider a wide array of tailored tools to address real – not speculative – harms while respecting civil rights and civil liberties. The call to better understand when AI actually poses risks to privacy, elections, national security, and safety echoes the recommendation from the National Telecommunications and Information Administration to build governmental infrastructure for monitoring and assessing AI harms – a call the ACLU supports.

Finally, the report examines federal preemption of state law. Here, the lesson is simple: Congress should avoid unnecessarily cutting off states’ authority to establish robust, meaningful protections. States have long been the “laboratories of democracy,” and that has been emphatically true around technology. Although states’ approaches may be imperfect, they play a critical role in developing legislation, innovating in policies, and establishing guardrails. Federal legislation should serve as a floor upon which states can build protections; overbroad preemption will unnecessarily calcify our collective policy apparatus and leave harms unaddressed.

Maya Wiley, President and CEO of The Leadership Conference on Civil and Human Rights:

AI is here. Yet, it has few standards or guardrails, and it can harm, and has harmed, real people. Too often, these systems inexplicably deny life-saving health care, reject mortgages that would help house deserving families, or jail people for crimes they didn't commit — all due to faulty AI that must be regulated. Today’s report balances the opportunities for our nation’s leadership in AI innovation with recognition of the critical need for guardrails that ensure that AI systems are fair and safe for everyone. This report signals that there are advocates on both sides of the aisle in Congress who understand these real harms and that safeguards are essential to guarantee fair and safe AI for all.

The task force illustrated the need to educate, train, and upskill students and workers so that everyone can enjoy the economic boom of AI. The report's authors also make clear that data privacy and AI systems are inextricably linked, and they acknowledge the many harms that occur when our personal data are abused. There are also notable gaps. We need concrete measures to address potentially faulty AI, like regular testing, assessments, and a prohibition on biased AI. We also need specifics to combat AI-generated disinformation that undermines people’s right to vote, including methods to identify and address the use of generative AI, manipulated media, and deepfakes.

I’d like to personally thank House Minority Leader Hakeem Jeffries (D-NY) and Task Force Co-Chair Ted Lieu (D-CA) for their leadership in ensuring that civil rights were duly considered. In drafting this report, the task force heard from a variety of experts across sectors — including myself and members of The Leadership Conference’s Center for Civil Rights and Technology Advisory Council, like Damon Hewitt, Amanda Ballantyne, and Sorelle Friedler, alongside other critical civil society voices — and its contents are better for it. We look forward to working with both sides of the aisle to translate protections into law. We hope that this report’s findings and recommendations on civil rights inform legislation in the next session.

We all lose if AI developers and deployers move too fast and break things without regard for who or what they harm. That’s not a path toward fairness. That’s not a path toward true innovation. We will continue to hold decision-makers accountable to the values outlined in this report and fight for a fair AI future for everyone.

Authors

Prithvi Iyer
Prithvi Iyer is a Program Manager at Tech Policy Press. He completed a Master of Global Affairs at the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Research Foundation.
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation.
