January 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino / Feb 1, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC.

Tech CEOs testify before the Senate Judiciary Committee, January 31, 2024. (l-r) Jason Citron, Discord; Evan Spiegel, Snap; Shou Chew, TikTok; Linda Yaccarino, X; Mark Zuckerberg, Meta.

Perhaps the most consequential moment for tech policy in the United States in the first month of 2024 came on its final day. On January 31, the Senate Judiciary Committee held a blockbuster hearing where senators grilled top social media company executives on their failure to protect children online. In one compelling moment, Meta CEO Mark Zuckerberg apologized to families whose children were impacted by social media harms after lawmakers released internal emails showing that Meta declined to hire staff to improve children’s online well-being.

Whether there will be any legislative outcomes remains to be seen. Although federal kids’ online privacy laws remained largely stalled in Congress, movement in states and industry continued. Meta announced that it would hide content related to suicide, self-harm, and eating disorders for teens and automatically apply the most restrictive content controls on Instagram and Facebook for all teens. Instagram CEO Adam Mosseri followed up the announcement with a video statement outlining the new Instagram content controls. Antigone Davis, Meta’s Global Head of Safety Policy, published a framework for federal child online safety legislation focused on parental controls, age verification at the app store, and content and ad-targeting standards. Snap Inc., the developer of Snapchat, endorsed the Kids Online Safety Act (KOSA, S. 1409) ahead of Snap CEO Evan Spiegel’s testimony before the Senate Judiciary Committee. X also publicly endorsed the Stop CSAM Act of 2023 (S. 1199). Parents for Safe Online Spaces (ParentsSOS), a group of families who lost children in connection to online harms, also endorsed KOSA. NetChoice, a trade association, sued Ohio to stop the state’s Social Media Parental Notification Act, arguing that the online child privacy law violates constitutional rights.

Also on Capitol Hill, the House Financial Services Committee announced the formation of an AI working group to examine how AI is affecting the financial services and housing industries. Additionally, a group of 17 Democratic senators and one independent, led by Sen. Raphael Warnock (D-GA), sent a letter to the Department of Justice (DOJ) calling on the DOJ to scrutinize facial recognition tools for civil rights violations. The letter followed a report published by the National Academies of Sciences, Engineering, and Medicine calling for federal oversight of the facial recognition industry and shared standards for evaluation. Finally, the Senate Judiciary Committee’s Subcommittee on Criminal Justice and Counterterrorism held a hearing on “AI in Criminal Investigations and Prosecutions.”

On the agency side, in January, the National Science Foundation (NSF) launched a $16 million program in cooperation with the Ford Foundation, the Patrick J. McGovern Foundation, Pivotal Ventures, Siegel Family Endowment, and the Eric and Wendy Schmidt Fund for Strategic Innovation. The Responsible Design, Development, & Deployment of Technologies (ReDDDoT) program “seeks to ensure ethical, legal, community and societal considerations are embedded in the life cycle of technology’s creation and use” and will support research, education, and community empowerment. Also, this month, the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) opened applications for the Tech Hiring Professional Training and Development Program, which seeks ideas for designing an innovative training program for federal hiring of technical professionals.

At the Supreme Court, civil society, elected officials, and industry filed a wave of amicus briefs in the NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC cases, which are scheduled to be heard in February. Submitters included the American Economic Liberties Project, which filed an amicus brief in NetChoice v. Paxton alongside a group of academics, including Tim Wu, Professor of Law at Columbia University and former special assistant to President Biden for technology and competition policy. They argued in favor of Texas, despite substantive disagreement with the state’s laws, warning that a contrary ruling “would risk granting a broad and unjustified immunity to social media platforms from nearly any regulation in the public interest” and would “implicate teenage social media laws, nascent AI regulation, and proposed antimonopoly laws.”

Action at the Federal Trade Commission (FTC) continued as the agency hosted the FTC Tech Summit, a half-day event focused on AI in relation to chips and cloud infrastructure, data and models, and consumer applications. Also, the Electronic Privacy Information Center (EPIC) filed a complaint with the FTC arguing that an automated system called “Fraud Detect,” used in 42 states to detect public benefits fraud, is ineffective in its fraud predictions and violates federal standards for responsible algorithmic decision-making systems. The National Health Law Program (NHeLP), EPIC, and Upturn also filed an FTC complaint alleging that Deloitte’s Texas Integrated Eligibility Redesign System (TIERS) is inaccurate and unreliable when determining Medicaid eligibility, leading to the incorrect rejection of eligible people from the Medicaid program.

In other industry news, OpenAI changed its policy to allow military customers and uses, explaining that its policy “does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property” but that there are “national security use cases that align with [their] mission.” Also, with implications for AI, researchers published findings on vulnerabilities in many brands and models of common graphics processing units (GPUs), which are key for AI and big data processing. These vulnerabilities could allow attackers to steal significant amounts of data from a GPU’s memory. In response to the findings, Apple, Qualcomm, and AMD confirmed that their products were affected, while Nvidia, Intel, and Arm GPUs were not found to contain the vulnerability. Amazon announced that Ring, its doorbell camera product, will “no longer let police and other government agencies request doorbell camera footage from within the company’s Neighbors app” and instead will require public safety agencies to file a formal legal request for footage. Finally, Bloomberg reported that Apple will face a Justice Department antitrust lawsuit as early as March.

Read on to learn more about January developments in AI executive order implementation, action from the FTC on location data and AI partnerships, and concerns about political deepfakes from generative AI.

Federal Agencies Announce Major Progress in AI Executive Order Implementation

Summary

January 28 marked 90 days since the release of the AI executive order, and many federal agencies announced progress in meeting implementation requirements at the end of the month. The Department of Commerce, OMB, OPM, the General Services Administration, the National Science Foundation, the Department of Labor, and others indicated that they are on track to meet early deadlines set by the October 30, 2023 AI executive order. Early requirements within the executive order focused on the federal government’s use of AI, the AI talent surge, and bolstering government funding for AI and access to resources for AI research. As of January 4, 2024, according to FedScoop, 14 out of the 24 Chief Financial Officer Act agencies had designated a chief AI officer to develop a strategy for managing their agencies’ use of AI. The White House released a fact sheet marking three months since the release of the AI EO that highlighted the administration’s continued efforts to protect Americans from the potential risks and harms of AI while advancing innovation. The White House noted that nine government agencies had submitted risk assessments to DHS and that agencies had completed all of the 90-day tasks required by the executive order. Other highlights include:

  • The National Science Foundation launched the National Artificial Intelligence Research Resource (NAIRR). The NAIRR pilot provides researchers with the “resources needed to carry out their work on AI, including advanced computing, data, software, and AI models.” NAIRR includes data and resources from 11 federal agencies and over 20 private sector partners, including Microsoft, Intel, and IBM. As part of the launch, the NSF and DOE are accepting proposals from interested researchers whose projects seek to advance safe, secure, and trustworthy AI. The AI executive order required the NSF to establish a pilot program implementing NAIRR within 90 days of its release in October.
  • AI companies are required to share safety test results with the Department of Commerce. A priority among the 90-day goals from the executive order was “a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests.” The requirement is designed to ensure that AI systems that may “pose risks to national security” are deemed safe prior to being released to the public. It is unclear which companies will be required to share information with the federal government, but these actions marked “the first formalized safety and security information-sharing program” between industry and the government.
  • The White House convened experts to discuss competition in the AI ecosystem. On January 19, Lael Brainard, Assistant to the President and National Economic Advisor, convened a meeting with representatives from various federal offices, civil society stakeholders, academic experts, and industry leaders to discuss “AI policy that promotes fair, open, and competitive markets.” Participants addressed the risks of concentration in the AI ecosystem and ways to ensure AI systems can be used across industries, as well as the possible harms from a lack of competition with respect to prices, quality, innovation, and privacy. Following the meeting, the Biden Administration emphasized that promoting competition and innovation in the AI ecosystem is central to its AI policy agenda.
  • Federal agencies release RFIs to inform policy development. In accordance with the executive order, OMB released an RFI seeking feedback on privacy impact assessments, while USAID and the State Department sought input on the AI in Global Development Playbook.

Stakeholder Response

In a Senate Homeland Security and Governmental Affairs Committee hearing on using AI to improve government services, Sen. Gary Peters (D-MI) said that federal procurement of AI will “have a big impact on AI throughout the economy” and serves as an effective way to think about AI regulation. Former US Deputy Chief Technology Officer and Niskanen Center Senior Fellow Jennifer Pahlka urged lawmakers to simplify federal procurement requirements and suggested that addressing long-standing problems with government hiring should be a priority, given the AI executive order’s push for more federal AI talent.

On January 26, a group of public interest organizations released a list of shared priorities reflected in comments to OMB’s November 2023 draft guidance for “governance and risk management for federal agencies’ use of AI,” as required by the AI executive order. Priorities included expanding the guidance’s scope to ensure algorithmic decision-making by federal agencies is “safe and equitable,” strengthening transparency requirements, and issuing additional guidance to agencies on “evaluating and mitigating risks of discrimination and other harms,” among others. Organizations that shared common priorities include the Center for Democracy & Technology, American Civil Liberties Union, Brennan Center for Justice, Center for American Progress, Data & Society, Electronic Privacy Information Center, Leadership Conference on Civil & Human Rights, Legal Defense Fund, and Upturn.

What We’re Reading

The Brookings Institution’s Nicol Turner Lee and Jack Malamud explored how Congress has so far failed to regulate AI despite executive action and what lawmakers should do to build on the Biden Administration’s efforts. Politico highlighted ongoing efforts by conservatives to challenge the AI executive order by claiming that President Biden is abusing the Defense Production Act by using it to regulate and compel reporting by AI companies. FedScoop has been tracking agencies’ chief AI officers as they are announced. Tech Policy Press examined overarching tech policy themes from 2023 and what to expect in 2024. In Politico, Ganesh Sitaraman, director of the Vanderbilt Policy Accelerator for Political Economy and Regulation, and Tejas N. Narechania, Faculty Director of the Berkeley Center for Law and Technology, discussed the range of tools lawmakers have at their disposal to regulate AI and safeguard competition.

FTC Notches Wins on Location Privacy and Launches Major Generative AI Study

Summary

The FTC had a busy month, kicking off 2024 by issuing two proposed orders banning data brokers X-Mode and InMarket from selling consumer location data, as well as launching a major inquiry into generative AI partnerships.

  • In a settlement with X-Mode Social and its successor Outlogic, the Federal Trade Commission published a proposed order prohibiting the company from “collecting, using, maintaining, or disclosing a person’s location data absent their opt-in consent” and requiring that X-Mode “adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.” The FTC’s complaint emphasized the dangers of the collection and sale of personally identifiable data, especially location data, and charged X-Mode with multiple counts of unfair or deceptive conduct in violation of the FTC Act. FTC Chair Lina Khan explained the unique dangers of geolocation data, saying that “geolocation data can reveal not just where a person lives and whom they spend time with but also, for example, which medical treatments they seek and where they worship. The FTC’s action against X-Mode makes clear that businesses do not have free license to market and sell Americans’ sensitive location data.” Following the X-Mode case, the FTC also banned InMarket, a targeted advertising company, from “selling, licensing, transferring, or sharing any product or service that categorizes or targets consumers based on sensitive location data.” The FTC charged that the company failed to “fully inform consumers and obtain their consent before collecting and using their location data for advertising and marketing.”
  • The FTC also launched an inquiry under Section 6(b) of the FTC Act that will explore corporate partnerships and investments with AI providers to better understand their impacts on free markets and healthy competition. As part of this inquiry, the FTC issued subpoenas to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI, “requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers.” The inquiry specifically examines partnerships between Microsoft and OpenAI, Anthropic and Google, and Anthropic and Amazon. Such 6(b) studies can serve as preludes to FTC investigations and enforcement actions, as well as inform legislation and other policymaking efforts. Also ongoing were discussions between the FTC and DOJ on which agency has jurisdiction to review OpenAI and Microsoft’s relationship.

Stakeholder Response

In response to the proposed settlement, a statement from Broadsheet, a public relations firm representing X-Mode, argued that they “disagree with the implications of the FTC press release. After a lengthy investigation, the FTC found no instance of misuse of any data and made no such allegation.” The Electronic Frontier Foundation celebrated the X-Mode and InMarket proposed orders, calling it “welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).” Finally, Public Knowledge celebrated the FTC’s 6(b) inquiry, calling it a “critical study” that hopefully “will lead to a more competitive digital marketplace and strong regulatory oversight of AI markets.”

What We’re Reading

WIRED broke down the X-Mode case, arguing that the result did not go far enough to protect all people’s location data and that the emphasis on user consent was moot, given that most consumers agree to privacy policies without true comprehension. JD Supra outlined five lessons from the case for businesses. Researchers at Imperial College London published a paper on the ease of identification in location datasets, finding that “individuals are likely re-identifiable” in anonymized location data. 404 Media published a feature article on Patternz, a surveillance tool that can track personal devices through advertisements in mobile apps. In Tech Policy Press, Sydney Brinker wrote about how a lack of online data privacy poses an especially difficult threat to the disabled community. The Washington Post reported on the 6(b) inquiry, including reactions from key industry stakeholders, and Bloomberg provided more detail on the subpoenas.

Political Deepfakes and AI-Generated Content Are Causing Concerns Ahead of the 2024 Elections

Summary

As the 2024 elections approach, concerns have grown over the use of political deepfakes and other digitally altered content to misinform voters and undermine democracy. Prior to New Hampshire’s January primary election, voters reported receiving unauthorized robocalls replicating President Biden’s voice and urging them not to go to the polls. The digitally altered calls, which were allegedly created using AI startup ElevenLabs’ voice-cloning technology, highlighted the potential for AI-generated media to undermine the 2024 election and “reignited calls by officials for federal action.” In an effort to protect the public from harmful deepfakes, a bipartisan group of lawmakers introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act (H.R.6943), which would protect individuals from unauthorized “duplication, replication or falsification” of their voice, image, or likeness.

Stakeholder Response

In an interview with ABC News, No AI FRAUD Act sponsor Rep. Maria Elvira Salazar (R-FL) noted that “everyone should be entitled to their own image and voice, and my bill seeks to protect that right” and said the bill would punish individuals who use generative AI to replicate the likeness of others without consent. Sen. Amy Klobuchar (D-MN) warned that robocalls, as seen in New Hampshire, have “the potential to destroy democracy” and that federal action cannot wait until the fall.

OpenAI reported that it is rolling out new policies and AI-generated image detection tools designed to combat misinformation ahead of the 2024 elections. The new policies prohibit users from using the company’s tools to “impersonate candidates or local government officials” or to “discourage voting or misrepresent the voting process,” and bar them from “engaging in political campaigning or lobbying,” including generating personalized campaign materials. The company is also working with the National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for information on US voting and elections. Despite OpenAI’s new policies, the Mozilla Foundation’s Jesse McCrosky found that the policies were not enforced and, in a simple five-minute experiment, was able to generate personalized campaign ads using ChatGPT.

What We’re Reading

  • News and op-eds: Anticipating the 83 national elections in 78 countries that will affect more than 4 billion people this year, The New York Times examined the risks posed by a potential wave of disinformation and misinformation as new generative AI capabilities compound existing challenges from malicious state actors and extremism. The New York Times also highlighted renewed efforts by state legislatures to address political deepfakes ahead of the 2024 elections. NBC News analyzed how ongoing social and political tensions, combined with continued advances in technology, have created an environment where disinformation poses an unprecedented threat in 2024. Laleh Ispahani, executive director of Open Society-US, argued in The Messenger (R.I.P.) that lawmakers must act quickly to mitigate the potential risks that unfettered AI technology poses to democracy.
  • Research and reports: The World Economic Forum released a briefing paper series addressing the ethical and responsible development of generative AI, the disruptive potential of generative AI and how to harness its benefits, and the groundwork for “resilient and inclusive global governance.” Researchers at the National Institute of Standards and Technology (NIST) released a report finding that AI systems can be deliberately “poisoned” when exposed to untrustworthy data. For the Brookings Institution, NYU Center for Social Media and Politics researchers Zeve Sanderson, Solomon Messing, and Joshua A. Tucker conducted a review of academic literature to shed light on AI’s impact on online misinformation and how lawmakers can safeguard against AI-related election risks. Aspen Digital announced the launch of the AI Elections Initiative to safeguard US elections from the harms of generative AI.

New Legislation

  • Block Foreign-Funded Political Ads Act (H.R.6696, sponsored by Reps. Ted Lieu (D-CA), Zach Nunn (R-IA), Don Beyer (D-VA), and Marcus Molinaro (R-NY)): This bill would mandate that broadcasting stations, cable and satellite television providers, and online platforms “make reasonable efforts to ensure that political advertisements are not purchased by a foreign national.”
  • No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R.6943, sponsored by Reps. María Elvira Salazar (R-FL) and Madeleine Dean (D-PA)): This bill would establish federal protections for individuals’ control over identifying characteristics, such as their likeness and voice, against AI-generated replications. It would empower people, including artists and creators, to legally challenge the use of such depictions and to seek financial remedies, including damages for resulting injuries and the profits gained by the infringing party through the use of their likeness or voice.
  • Federal Artificial Intelligence Risk Management Act (H.R.6936, sponsored by Reps. Ted W. Lieu (D-CA), Zach Nunn (R-IA), Don Beyer (D-VA), and Marcus Molinaro (R-NY)): This bill would require US federal agencies and their vendors to follow the AI risk management guidelines outlined by NIST. Specifically, entities would be required to integrate the NIST framework into their AI management practices, encompassing four crucial functions: governing, mapping, measuring, and managing. Governing establishes a culture of risk management, mapping recognizes contextual factors, measuring involves assessing and tracking identified risks, and managing entails prioritizing and addressing risks based on projected impact.
  • Source Code Harmonization And Reuse in Information Technology (SHARE IT) Act (S.3954, sponsored by Sens. Ted Cruz (R-TX) and Gary C. Peters (D-MI)): This bill would mandate government-wide sharing of source code to minimize redundant technology development, enhance efficiency, and promote innovation in federal information technology systems. It would require agencies to store custom-developed code in accessible public or private repositories to foster collaboration and ensure ownership of software components and associated technical elements.
  • Medicare Transaction Fraud Prevention Act (S.3630, sponsored by Sens. Mike Braun (R-IN) and Bill Cassidy (R-LA)): This bill would amend Title XI of the Social Security Act to introduce a pilot program to assess the efficacy of a predictive risk-scoring algorithm for overseeing payments related to “durable medical equipment and clinical diagnostic laboratory tests under the Medicare program.”

Public Opinion

Artificial Intelligence

Arctic Wolf commissioned the Center for Digital Government to survey over 130 state and local government leaders in the United States in November 2023. It found that:

  • The top two AI-powered threats identified by respondents over the past year were disinformation campaigns (cited by 50.7 percent of respondents) and phishing attacks targeting election officials or staff (47.1 percent).

A World Economic Forum poll of 1,490 experts across public, private, and academic sectors conducted between September 4 - October 9, 2023, found that:

  • Over half (53 percent) of respondents believe AI-generated disinformation and misinformation are the risks most likely to present a “material crisis” in the near future (a two-year timeframe).
  • In the longer term (10-year timeframe), AI-related risks rank lower compared to environmental concerns, such as extreme weather events and natural resource shortages, but remain relatively high. Misinformation and disinformation ranked fifth among ten global risks, while the adverse outcomes of AI technologies came in sixth on the list.

In the first wave of a new quarterly survey, Deloitte surveyed more than 2,800 people who work in managerial and administrative roles across six industries and 16 countries between October and December 2023. It found that:

  • About 75 percent of respondents anticipate that generative AI will bring about a transformation in their organizations within the next three years.
  • About 25 percent believe that their organizations are “highly” or “very highly” prepared to manage governance and risk issues associated with the adoption of generative AI.
  • 47 percent of respondents believe that their organizations are adequately educating employees on the capabilities, benefits, and value of generative AI.
  • 51 percent of respondents are concerned about the capacity of generative AI to exacerbate economic inequality.

A recent poll by AI Impacts surveyed 2,778 AI expert researchers between October 11 - 24, 2023, to seek their insights into the progress and impact of AI. It found that:

  • Aggregate responses indicate that the experts surveyed predict a 10 percent chance by 2027, and a 50 percent chance by 2047, of high-level machine intelligence, defined as when “unaided machines can accomplish every task better and more cheaply than human workers.”
  • 68.3 percent of respondents believe that positive outcomes from AI outpacing human performance are more likely than negative ones.
  • Over half of the respondents express that “substantial” or “extreme” concern is justified for certain AI-related scenarios, including the spread of false information (86 percent of respondents), dangerous groups creating powerful tools (73 percent), authoritarian population control (73 percent), and exacerbated inequality (71 percent).

The AI Policy Institute conducted a survey of 1,022 voters on January 15, 2024, via online samples. It found the following:

  • 76 percent of voters express a preference for candidates who endorse AI regulation.
  • A majority (55 percent) of respondents desire a bipartisan approach to AI regulation.
  • 61 percent of respondents either “strongly” or “somewhat” support the current set of proposals for AI legislation in the Senate.

Google commissioned Ipsos to conduct a 17-country survey of over 17,000 people on experiences with and future expectations of artificial intelligence (AI). It found that:

  • Over half of respondents (54 percent) believe AI will likely benefit them.
  • 52 percent of respondents expect positive impacts on health and well-being in the next five years.
  • When asked how important different AI applications would be for various areas of society, medical breakthroughs topped the list across the 17 countries surveyed, with 45 percent of respondents rating them as very important, followed by better security (42 percent), climate change (37 percent), and research and development (36 percent).
  • Approximately 47 percent of respondents believe AI will contribute to the betterment of underrepresented groups, fostering a more equitable world.
  • About 78 percent of respondents agree that “government and technology companies should work together to oversee the development of AI.”

24 Seven surveyed 2,128 professionals across the US, Canada, and the UK in September 2023. It found that:

  • 84 percent of respondents report that their organization utilizes AI-powered tools. However, more than half (55 percent) believe that employees in their organization possess only a basic, very limited, or no understanding of AI.
  • 70 percent of employees anticipate a surge in hiring for AI-specific roles within their organizations over the next two years. 90 percent of employees state that additional perks related to upskilling would incentivize them to remain with their current employer.
  • 61 percent of respondents state that their organization has recruited external consultants or freelancers with specialized AI skills. 59 percent of respondents believe that engaging freelancers or consultants with specialized tech skills will aid in bridging the AI knowledge gap within their companies.

Data Privacy

In December 2023, US News conducted a survey using Pollfish, polling 1,200 US adults. The findings revealed that:

  • 61 percent of respondents say they discovered that their personal data had been breached or compromised in at least one account at some point in 2023.
  • Almost half (44 percent) had received multiple data breach notices, while 6 percent had received notifications “too many times to count.”
  • 45 percent of respondents believe becoming a data breach victim was inevitable.


We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
