August 2023 U.S. Tech Policy Roundup

Kennedy Patlan, Rachel Lau, J.J. Tolentino / Sep 1, 2023

Rachel Lau, Kennedy Patlan, and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC.

U.S. Capitol - Shutterstock

With the U.S. Congress in August recess, tech policy news from the Hill was a bit slower than usual this month. But the Executive Branch, federal agencies, and states continued to push forward policies addressing artificial intelligence (AI), privacy, content moderation, and antitrust.

In mid-August, the Biden Administration filed briefs urging the Supreme Court to review Texas and Florida laws that make it illegal for social media platforms to suspend or punish users for the content they share or post. Separately, the U.S. Consumer Financial Protection Bureau announced plans to regulate data brokerage companies, a move that coincided with a White House roundtable discussion on data broker practices held this month. Also in August, the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and the National Institute of Standards and Technology published a fact sheet outlining roadmap recommendations to prepare the U.S. for future quantum capabilities and their potential risks.

This month, the U.S. Equal Employment Opportunity Commission (EEOC) reached its first settlement in an AI bias lawsuit. However, reports revealed that the EEOC faces roadblocks in holding AI software companies accountable for related hiring incidents: enforcement often depends on job seekers or employers filing charges with the EEOC, which has proven difficult given how little awareness most individuals have of AI tools' influence on hiring outcomes.

In other AI regulation news, the Federal Election Commission voted to open a public comment period on a petition for rulemaking that would address the use of AI in political advertising. Also this month, the Department of Defense (DoD) announced the creation of a task force focused on generative AI. Dubbed “Task Force Lima,” the group will “assess, synchronize, and employ generative AI capabilities across the DoD.”

At the state level, there was significant movement on children’s rights online, including in Illinois, which passed the first law to protect child influencers. State officials also gathered this month at the National Conference of State Legislatures’ 2023 Legislative Summit, where social media protections for children were among the topics discussed.

While the 118th Congress was mostly in recess, policymakers and journalists alike discussed plans to revive ideas and initiatives from the 117th Congress, including a renewed antitrust fight for which Rep. Ken Buck (R-CO) intends to lay the groundwork by engaging new members who may be able to push it forward in future Congressional sessions. Meanwhile, Rep. Kat Cammack (R-FL) was reported to be exploring lead sponsorship of a new version of the Open App Markets Act, to be re-introduced through the House Energy and Commerce Committee. Critics also re-evaluated potential concerns raised by the Kids Online Safety Act, which is expected to be on the agenda this fall.

This month, civil rights organizations continued to challenge big tech companies. X (formerly Twitter) filed a lawsuit against the Center for Countering Digital Hate over research published by the nonprofit. Shortly after, House Judiciary Committee Chairman Jim Jordan (R-OH) also requested information from the nonprofit as part of his probe into alleged internet censorship by the Biden administration. Separately, nonprofit organizations including the Campaign Legal Center, the League of Women Voters of Washington, Fix Democracy First, and NYU’s Brennan Center filed an amicus brief in the Washington v. Meta lawsuit emphasizing the need for effective digital transparency laws.

In August, it was revealed that X was slowing access to competitor sites that owner Elon Musk had previously criticized. Dating apps were reported to be endangering users while invoking Section 230 as a defense against liability for those risks. TikTok was banned from New York City government devices. And meeting platform Zoom was in the hot seat over news that the company had used customer data to train its AI tools without customer consent.

Over the past few years, the Freedman Consulting, LLC, team has been routinely tracking legislation and other public policy proposals related to platforms, AI, and relevant tech policy issues in a database at techpolicytracker.org. As of September 1, this resource is no longer actively maintained, although existing content will remain accessible until at least the end of 2023. But don’t worry, our monthly roundups on Tech Policy Press will continue, and we look forward to sharing additional updates about future tech policy tracking efforts in the coming months. The tech policy tracking database is available under a Creative Commons license as well, so please feel free to build upon it. If you have questions about the Tech Policy Tracker, please don’t hesitate to reach out to Alex Hart and Kennedy Patlan.

Read on to learn more about August developments across general and generative AI policy news.

Forthcoming White House AI Executive Order Continued to Gain Momentum

  • Summary: The Biden Administration was under increased pressure this past month to move forward with an Executive Order on AI. Arati Prabhakar, director of the White House Office of Science and Technology Policy, recently commented on the Biden Administration’s seriousness in dealing with AI regulation. She stated that the executive order “is not just the normal process accelerated – it’s a completely different process.” Director Prabhakar also sent out a joint memo with Shalanda D. Young, director of the Office of Management and Budget, to executive branch departments and agencies outlining AI as a top priority for multiagency research and development efforts for the FY2025 budget. The memo acknowledged the societal consequences of advancing AI technology and embraced the federal government’s essential role in “mitigating AI risks and using AI technology to better deliver on the wide range of government missions.” The memo called on agencies to fund research and development activities driven by improved community engagements to “advance trustworthy AI technology that protects people’s rights and safety, and harness it to accelerate the Nation’s progress.”
  • Stakeholder Response: A group of civil society organizations including the Center for American Progress, Center for Democracy & Technology, and The Leadership Conference on Civil and Human Rights sent a letter to the White House urging the administration to make the AI Bill of Rights a centerpiece in the upcoming AI executive order. The letter suggested that by making the Blueprint for an AI Bill of Rights binding U.S. policy “on the use of AI tools by all federal agencies, contractors, and those receiving federal grants,” the administration could ensure that the public is protected from the harms of automated systems. The letter also offered recommendations to ensure AI systems used by the federal government are effective, safe, and nondiscriminatory.
  • Accountable Tech, AI Now, and the Electronic Privacy Information Center (EPIC) released the Zero Trust AI Governance Framework in response to the self-regulatory approaches popular among top AI companies. The framework laid out overarching principles for future regulation. These included a call for policymakers to apply existing laws to the industry, such as anti-discrimination, consumer protection, and competition laws, alongside clarifying the limits of Section 230. The framework also suggested establishing clearly defined policies that leave no room for subjectivity, such as prohibiting facial recognition for mass surveillance and prohibiting fully automated hiring processes. Finally, the framework placed the burden on AI companies to prove that their systems are not harmful, with systems subject to pre- and post-deployment harm mitigation requirements.
  • The Center for American Progress (CAP) released revisions to its comments on a National AI Strategy, which included a national jobs plan and encouraged the Biden Administration to make the AI Bill of Rights binding U.S. law, effectively harnessing AI’s benefits while mitigating its societal risks and harms.
  • Led by Executive Director Daniel Colson, the Artificial Intelligence Policy Institute was launched this month with the goal of developing policy solutions for governments to mitigate the most extreme AI risks.
  • What We’re Reading: Lorena O’Neil of Rolling Stone published a story about the women who tried to warn the public about AI technology’s harms to marginalized communities and people of color. Puneet Cheema, Brian J. Chen, and Amalea Smirniotopoulos published an op-ed in The Hill urging the Biden Administration to prohibit a number of AI-related practices, including predictive policing and employee surveillance. Also in The Hill, Laleh Ispahani, executive director of Open Society-U.S., highlighted the need for a comprehensive national data privacy standard to mitigate the harms of AI and other emerging technologies. A recent Vox article provided an overview of rules that U.S. policymakers are considering as they continue to tackle AI regulations. The New York Times recapped the Generative Red Team Challenge held during this month’s annual DEF CON computer security conference. Similarly, in a Tech Policy Press article, Ranjit Singh, Borhan Blili-Hamelin, and Jacob Metcalf examined the use of open-source red-teaming as a means of bolstering AI accountability efforts. Alex Rizzi and Lucciana Alvarez Ruiz at the Center for Financial Inclusion developed a guide for investors to better identify harmful AI gender biases in finance.

FEC Advances Deepfake Petition

  • Summary: Even with Congress in August recess, generative AI action continued in the federal government. This month, the Federal Election Commission (FEC) unanimously advanced a petition calling on the FEC to regulate the use of deepfakes in political ads. The petition, submitted by Public Citizen, pointed to the potential impact on the upcoming 2024 presidential election of hyperrealistic AI-generated photo, video, and audio content depicting false information, or “deepfakes.” Deepfake photo, video, and audio content all have the potential to spread dis- and misinformation as generative AI makes such content more convincing, cheaper, and faster to produce. The FEC’s advancement of Public Citizen’s petition opened a 60-day public comment period beginning at the end of August and signaled the potential opening of a new venue for regulating generative AI technologies. The FEC’s movement into deepfake regulation came after an earlier deadlock in June, when the commission failed to reach consensus on whether it had the statutory authority to regulate AI issues. Republican Commissioner Allen Dickerson had stated in June that the FEC’s authority was “limited to instances where a campaign broadly misrepresents itself as acting on behalf of any other candidate or political party.” The move to a public comment period does not commit the FEC to publishing rules, but it offers an opportunity to investigate the potential impact of generative AI and deepfake content on upcoming elections.
  • Stakeholder Response: In July, 50 members of Congress published a letter in support of an FEC investigation into deepfakes. Various civil society organizations have also supported Public Citizen’s petition for the FEC to regulate deepfake content. Citizens for Responsibility & Ethics in Washington (CREW) sent a letter to the FEC in support of the petition, pointing to the Trump campaign’s use of deepfake audio of Elon Musk, Adolf Hitler, and others, as well as the DeSantis campaign’s use of deepfake images of Trump with Dr. Anthony Fauci, as examples of the imminent dangers deepfake content poses to elections. Relatedly, on the journalism front, a group of media and news organizations published an open letter urging elected officials globally to strengthen regulations surrounding generative AI and copyright, including rules on training datasets and requirements to notify consumers when generative AI is used to create news content.
  • What We’re Reading: A University College London study published in August found that humans can detect AI-generated speech only 73 percent of the time, with detection rates improving only minimally after people received training on how to recognize deepfake speech. In other news on generative AI, Anna Lenhart published a roundup of generative-AI-related federal legislative proposals and Ariel Soiffer wrote about the impacts of generative AI on copyright regulations in Tech Policy Press. Also in Tech Policy Press, Abhishek Gupta explored the emergence of generative AI as shadow AI. Regarding the intersection of AI and journalism, the Associated Press released guidelines on the use of generative AI for reporting, and The Verge discussed Google Chrome’s new generative-AI-powered summary tool. Finally, Axios reported on the launch of the Center for News, Technology & Innovation (CNTI), an initiative led by executives with media and tech experience that will focus on “addressing global internet issues, such as disinformation, algorithmic accountability, and the economic health of the news industry.”

New Legislation and Policy Updates

  • Child Online Safety Modernization Act (H.R.5182, sponsored by Rep. Ann Wagner (R-MO)): This bill would “modernize and enhance” the National Center for Missing and Exploited Children's (NCMEC) CyberTipline, the national centralized system for reporting child sexual abuse material (CSAM) on the internet, maintained by the congressionally mandated nonprofit. The bill would require platforms to report to the CyberTipline, expand the details required in reports, and replace all mentions of “child pornography” with “child sexual abuse material” in all U.S. federal statutes.
  • Calling on the United States to champion a regional artificial intelligence strategy in the Americas to foster inclusive artificial intelligence systems that combat biases within marginalized groups and promote social justice, economic well-being, and democratic values (H.Res.649, sponsored by Rep. Adriano Espaillat (D-NY)): This resolution urges the U.S. to develop and implement a safe and responsible Regional AI Strategy in the Americas built on the Blueprint for an AI Bill of Rights. The strategy urged by the resolution would make the safe design, development, use, and deployment of AI in the Western Hemisphere a strategic priority for U.S. domestic and foreign policy. It would also ensure that AI governance, investment, and innovation in the Western Hemisphere prioritize fairness, accountability, trustworthiness, privacy, and the protection of individual rights and democratic values.

Public Opinion Spotlight

From July 18-21, 2023, an Artificial Intelligence Policy Institute/YouGov poll surveyed 1,001 registered U.S. voters and found that a majority of U.S. voters are concerned about the risks posed by AI and favor federal AI regulation. Key findings include:

  • 72 percent of voters would prefer that AI development slow down, compared to just 8 percent who would prefer to see it speed up.
  • 86 percent of voters believed “AI could accidentally cause a catastrophic event,” and 70 percent agreed that “mitigating the risk of AI related extinction should be a global priority alongside other risks like pandemics and nuclear war.”
  • 82 percent of voters did not trust tech companies to self-regulate the AI industry, with 56 percent of voters supporting a federal agency regulating the technology.

Pew Research Center surveyed 11,201 U.S. adults from July 31 to August 6, 2023 to understand public attitudes about AI and its daily uses. The researchers found that a growing share of Americans are concerned about the role AI plays in daily life. Key findings include:

  • 52 percent of Americans said they felt more concerned than excited about the increased use of AI in daily life, up 14 percentage points from December 2022, when 38 percent expressed this view.
  • Out of the 33 percent of adults who have heard a lot about AI, 47 percent were more concerned than excited about it (up 16 percentage points from December 2022). Similarly, out of the 56 percent of adults who have heard a little about AI, 58 percent were more concerned than excited, up 19 percentage points from December.
  • Opinions about whether AI would help or hurt specific areas of daily life were more mixed. For example, 49 percent of respondents said that AI helps more than hurts when trying to find products and services online, while 53 percent of respondents said that AI hurts more than helps when individuals are trying to keep personal information private.

From August 14-15, 2023, a Reuters/Ipsos poll surveyed 1,005 adults, including 443 Democrats, 346 Republicans, and 137 independents, on national security concerns, China, and TikTok. The poll found that:

  • 47 percent of respondents at least somewhat supported a ban on TikTok in the United States, with 36 percent opposing a ban.
  • 58 percent of Republicans favored a TikTok ban, compared to 47 percent of Democrats.

A Los Angeles Times/Leger poll conducted July 28-30, 2023 surveyed U.S. consumers to better understand their concerns about AI’s impact on their jobs and whether they see a need for AI regulation or AI-related disclaimers. The poll found that:

  • 45 percent of Americans were concerned that AI would affect their line of work, compared to 29 percent who were not concerned.
  • 73 percent of respondents supported disclaimers on AI-generated content compared to 12 percent who opposed them.

In February 2023, Ipsos conducted a poll for C.S. Mott Children’s Hospital among 2,099 adults who were parents of at least one child aged 0-18 years living in their household. The poll found that:

  • Overuse of devices/screen time (67 percent), social media (66 percent), and internet safety (62 percent) were the top three child health concerns for parents.
  • These tech-related concerns outranked other concerns like depression/suicide (57 percent), school violence (49 percent), and guns/gun injuries (47 percent).

From August 17-21, 2023, Yahoo Entertainment/YouGov surveyed 1,665 U.S. adults about their opinions on the ongoing SAG-AFTRA strikes and AI-related content. The poll found that:

  • 61 percent of respondents said it would be a "bad idea" to include digital replicas of actors generated by AI in movies and TV shows.
  • 63 percent of respondents thought it was a "bad idea" for Hollywood to create AI-generated movie and television scripts.
  • 55 percent of respondents supported both actors and writers in their labor disputes.

A recent Certified Financial Planner Board of Standards poll conducted on July 11, 2023 among 1,153 adults indicated that nearly one in three investors would use AI as a financial advisor. The poll found that:

  • 31 percent of investors said that they would be comfortable implementing financial advice from a generative AI program without verifying those recommendations.
  • Younger investors were warier of generative AI financial advice than older investors: 62 percent of investors ages 45 and older said they were "very satisfied" with such advice, versus 38 percent of investors under 45.

We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.

Authors

Kennedy Patlan
Kennedy Patlan is a Project Manager at Freedman Consulting, LLC, where she assists with strategic development, project management, and research. Her work covers technology policy, health advocacy, and public-private partnerships.
Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
