Kennedy Patlan, Rachel Lau, and Carly Cramer are associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues.
As the 117th Congress came to a close, December was an opportunity to look back at the U.S. tech policy wins of 2022. The year was eventful for tech policy, with major developments in artificial intelligence, data, and healthcare privacy, particularly within federal agencies and the White House:
- 2022 began with the Senate Judiciary Committee marking up the Open App Markets Act (S.2710) and the American Innovation and Choice Online Act (S.2992), advancing both pieces of landmark legislation to the full Senate.
- In April, the White House released “A Vision for Equitable Data: Recommendations from the Equitable Data Working Group,” which outlined a roadmap for ways that the government can more effectively measure equity outcomes and increase diverse representation in data collection.
- In May, the Department of Justice and the Equal Employment Opportunity Commission issued guidance on algorithms and AI’s impact on disability discrimination in hiring.
- Following the June Supreme Court ruling overturning Roe v. Wade, the White House and several federal agencies, including the Department of Health and Human Services and the Federal Trade Commission, issued guidance regarding privacy protections.
- In August, President Biden signed the $280 billion CHIPS and Science Act into law, which promises opportunities for domestic semiconductor production as well as funding for R&D across technology issues including artificial intelligence. The bill also established a new Directorate for Technology, Innovation, and Partnerships within the National Science Foundation.
- In September, the Senate confirmed Arati Prabhakar as the new director of the White House’s Office of Science and Technology Policy (OSTP). The White House also released six “Principles for Enhancing Competition and Tech Platform Accountability” following a listening session with field experts and practitioners.
- In October, the White House released its Blueprint for an AI Bill of Rights, which outlines five principles for the “design, use, and deployment of artificial intelligence,” and announced a series of federal agency actions. That month, President Biden also released an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (also known as Privacy Shield 2.0), which addressed U.S. plans for implementing the European Union-U.S. Data Privacy Framework.
- In November, the Departments of Commerce and State announced plans for a 2023 executive order limiting the use of spyware, and in December, the administration took a stand on major Section 230 litigation.
- Finally, in December, the 117th Congress’s $1.7 trillion omnibus bill became law, including tech policy bills like the Merger Filing Fee Modernization Act (S.228 / H.R. 3843) and the INFORM Act (S.936 / H.R. 5502).
We look forward to what the new 118th Congress brings with regard to legislative proposals on privacy, reform of Section 230, antitrust, and other important technology policy issues. You can continue to monitor and track passed and pending legislation via techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues. In early 2023, we will be archiving legislation from the 117th Congress to make room for the work of the 118th Congress.
Read on to learn more about December U.S. tech policy highlights from the White House, Congress, and beyond.
Platform Regulation Hitches a Ride on the Omnibus
- Summary: After the 117th Congress approved a $1.7 trillion omnibus text, President Biden signed the spending bill on December 29, 2022. The final piece of legislation came with a few wins but many notable omissions for tech advocates. Among the tech bills omitted from the omnibus were antitrust proposals, like the Open App Markets Act (S.2710) and the American Innovation and Choice Online Act (S.2992), and privacy bills, like the Kids Online Safety Act (S.3663) and the Children and Teens’ Online Privacy Protection Act. Here are the tech and related antitrust bills that did make it into the spending package:
- The No TikTok on Government Devices Act (S.1143), sponsored by Sen. Josh Hawley (R-MO): The bill prohibits the use of TikTok on executive agency devices due to national security concerns over the company’s ties to the Chinese government. The move comes as many states, such as Maryland, South Dakota, South Carolina, Alabama, and Texas, pursue their own bans on TikTok on government devices.
- The Merger Filing Fee Modernization Act (H.R. 3843 / S.228), sponsored by Sen. Amy Klobuchar (D-MN), Sen. Chuck Grassley (R-IA), Rep. Joe Neguse (D-CO), Rep. Victoria Spartz (R-IN), Rep. David Cicilline (D-RI), Rep. Ken Buck (R-CO), Rep. Jerrold Nadler (D-NY), and Rep. Chip Roy (R-TX): The bill raises the cost of filing fees on large mergers to generate funding that can be used for antitrust enforcement. An amendment to the omnibus guarantees immediate support for antitrust enforcers, granting them access to an estimated $1.4 billion in additional fees over the next five years.
- State Antitrust Enforcement Venue Act (H.R. 3460 / S. 1787), sponsored by Rep. Ken Buck (R-CO) and Sen. Mike Lee (R-UT): The bill, also known as the State AG Venue Act, grants state attorneys general more agency over how antitrust enforcement cases are heard by limiting “the transfer and consolidation of antitrust cases that are brought by states in federal court.”
- Foreign Merger Subsidy Disclosure Act (H.R. 5639 / S. 4322), sponsored by Rep. Scott Fitzgerald (R-WI) and Sen. Tom Cotton (R-AR): This bill requires companies and organizations submitting pre-merger notifications to disclose information on subsidies received from “foreign entities of concern,” as defined by the Infrastructure Investment and Jobs Act.
- The INFORM Act, (H.R. 5502 / S.936) sponsored by Rep. Jan Schakowsky (D-IL) and Sen. Richard Durbin (D-IL): The INFORM Act requires online marketplaces like Amazon and eBay to “collect, verify, and disclose” certain information from high-volume third-party sellers.
- Stakeholder Response: Policymakers have reacted to the omnibus news, including Sen. Amy Klobuchar (D-MN), who released a statement expressing her excitement for the Merger Filing Fee Modernization Act’s inclusion. Sen. Klobuchar also told NBC’s Meet the Press that she is looking ahead to the 118th Congress to advance technology policy reform: “We are lagging behind. It is time for 2023, let it be our resolution, that we finally pass one of these [tech regulation] bills.” Meanwhile, Sen. Joe Manchin (D-WV) expressed frustration with the antitrust omissions. Advocacy group Fight for the Future shared similar sentiments, but was happy to see that Congress did not include the Kids Online Safety Act in the final package. Other technology advocates also reflected on what the spending bill means for antitrust reform.
- What We’re Reading: Vox published a piece on TikTok’s algorithm and response to the omnibus. The Hill wrote about the advocacy pushes and roadblocks that led to the final inclusions and exclusions from the spending bill. Axios’s “Tech legislation’s 2022 scorecard” provides an overview of the tech provisions included and excluded from the omnibus.
Passage of the NDAA Includes Key AI Provisions
- Summary: After deliberation, the House and Senate reconciled their versions of the 2023 National Defense Authorization Act, allocating $858 billion in spending. The final version of the bill, which Congress passed on December 15 and President Biden signed into law on December 23, contains a number of key provisions related to artificial intelligence. Notably, a modified version of S.1353, the Advancing American AI Act, sponsored by Sens. Gary Peters (D-MI) and Rob Portman (R-OH), aims to streamline the implementation process for new AI technologies throughout the federal government while maintaining alignment with the protection of privacy, civil rights, and civil liberties. Additionally, the final version of the bill requires collaboration across the Department of Defense to produce a comprehensive assessment of the threat posed by adversaries’ use of artificial intelligence, alongside a five-year roadmap and implementation plan for the rapid adoption of AI systems. Beyond the Department of Defense, the Office of Management and Budget will issue new guidance for AI procurement and protection across the entire federal government. Furthermore, the legislation authorizes an increase of $50 million for artificial intelligence systems development at U.S. Cyber Command and $75 million for the Defense Advanced Research Projects Agency (DARPA) to implement recommendations issued by the National Security Commission on Artificial Intelligence (NSCAI).
- Stakeholder Response: Members of Congress on both sides of the aisle celebrated the passage of the NDAA. Sen. Mark Warner (D-VA) celebrated funding increases for DARPA and for the implementation of new AI technologies at U.S. Cyber Command, while Sen. Rob Portman (R-OH), who cosponsored the original Advancing American AI Innovation Act (S. 3175) with Sen. Jacky Rosen (D-NV), stated that the provisions included in the NDAA would strengthen government rules to keep the use of AI technology secure and protected. According to Alexandra Seymour, a former Department of Defense official who now serves as a fellow in the Center for a New American Security’s Technology and National Security Program, this year’s NDAA breaks from previous authorizations, which focused on broad principles for the adoption of AI technology; this year’s bill places a heavier emphasis on moving toward the active use of such technologies.
- What We’re Reading: Nextgov published an overview of the key cybersecurity and technology provisions included in the final version of the NDAA. According to Federal Times, the Defense Intelligence Agency will soon release a new artificial intelligence strategy focused on workforce recruitment and retention. In Tech Policy Press, Craig Aaron, CEO of Free Press and Free Press Action, wrote about the Journalism Competition and Preservation Act (JCPA, S. 673), another bill slated for potential inclusion in the NDAA that ultimately was excluded.
Biden Administration Continues Involvement in Section 230 Deliberations
- Summary: In Gonzalez v. Google, the Supreme Court’s first opportunity to rule on Section 230 of the Communications Decency Act, the Department of Justice filed a brief in early December arguing that although federal law protects platforms from responsibility for blocking or removing third-party content, it does not shield platforms from lawsuits related to their own conduct through recommendation algorithms. By siding against Google in the case, the Biden administration continued its calls to hold search and social media platforms accountable for the content they promote. The court will hear arguments in February.
- Stakeholder Response: The debate over Section 230 continues to heat up as Gonzalez v. Google moves along, with platforms and industry groups like Google and the Chamber of Progress defending Section 230 protections by arguing that weakening the law would make content moderation harder, rather than easier. Advocates for Youth, a nonprofit advocating for affirming sexual health and equity for youth, also signed onto the Chamber’s letter in support of Section 230. Additionally, Free Press Action filed in support of Section 230, arguing that narrowing Section 230 protections could have a chilling effect on online expression. In contrast, Sen. Josh Hawley (R-MO) filed an amicus brief arguing that Section 230 enables platforms to “escape any real accountability” for content moderation. A bipartisan coalition of state attorneys general, including California Attorney General Rob Bonta, also filed an amicus brief in support of greater liability for social media companies in relation to their recommendation algorithms and third-party content. The Anti-Defamation League also cautioned the Supreme Court against interpreting Section 230 too broadly.
- What We’re Reading: The Bipartisan Policy Center published a brief on Section 230, algorithmic content moderation, Gonzalez v. Google, and the potential implications of the case. The Washington Examiner wrote about the potential for House Republicans to amend or remove Section 230 in the new Congress. Axios published an overview of the tech industry’s response to upcoming Section 230-related cases.
New Legislation and Policy Updates
- Platform Accountability and Transparency Act (S. 797, sponsored by Sen. Chris Coons (D-DE) and Sen. Rob Portman (R-OH)): The bill, which was originally announced in December 2021, would grant independent researchers and the public access to previously undisclosed data sets from social media companies. This year’s version includes new changes, as reported by the Washington Post, such as the addition of privacy limits to researchers’ access to sensitive consumer data, access for researchers affiliated with nonprofits, relaxation of implementation requirements for the Federal Trade Commission, and a higher threshold for covered platforms’ monthly users. In Tech Policy Press, John Perrino also noted that the bill’s provisions are extended to cover virtual and augmented reality platforms.
- Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party (ANTI-SOCIAL CCP) Act (S. 5245, sponsored by Sen. Marco Rubio (R-FL)): The ANTI-SOCIAL CCP Act, introduced in December, seeks to “block and prohibit all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern” to protect Americans and the United States from undue foreign influence, surveillance, propaganda, and censorship. The act would, most notably, ban TikTok from the U.S. marketplace entirely. Rep. Mike Gallagher (R-WI) also introduced a House version of the bill in mid-December. FCC Commissioner Brendan Carr applauded the ANTI-SOCIAL CCP Act, citing national security risk and privacy risks for Americans. In contrast, trade association NetChoice stood in opposition to the bill, arguing that it sets a precedent giving the U.S. government discretion to shut down apps and other digital products. In Tech Policy Press, Justin Hendrix queried the phrase “digital fentanyl,” used by Carr and Rep. Gallagher to describe what they regard as TikTok’s dangers.
- The Tech Safety for Victims of Domestic Violence, Dating Violence, Sexual Assault and Stalking Act (H.R. 9544, sponsored by Reps. Debbie Lesko (R-AZ) and Anna Eshoo (D-CA) in the House and Sen. Ron Wyden (D-OR) in the Senate): The bill, introduced in mid-December, seeks to address the ways that social media platforms, devices, and spyware apps contribute to abuse. It would fund Department of Justice (DOJ) clinics focused on tech-enabled abuse, as well as DOJ grants for nonprofits and educational institutions that support victims of tech-enabled abuse.
- The Platform Integrity Act (sponsored by Reps. David Cicilline (D-RI) and Ken Buck (R-CO)): The bill, introduced in late December 2022, would amend Section 230 of the Communications Decency Act so that its liability protections for tech platforms do not apply to content that is “affirmatively promoted or suggested to their users.”
Public Opinion Spotlight
Morning Consult conducted a survey of 2,212 U.S. adults from November 11-26, 2022 on consumers’ opinions of and expectations for artificial intelligence. They found that:
- 24 percent of U.S. adults say they know “exactly” what AI is, compared with 40 percent of Gen Z adults and 43 percent of adults working in tech
- 52 percent of U.S. adults believe AI will change their life in a negative way
- 44 percent of U.S. adults believe AI will have a positive impact on innovation in science
- 43 percent of U.S. adults believe AI will have a positive impact on health care innovation
- 36 percent of U.S. adults believe AI will have a positive impact on innovation in education
- 38 percent of U.S. adults believe AI will have a negative impact on employment in major companies
- 34 percent of U.S. adults believe AI will have a negative impact on employment at small businesses
- 67 percent of U.S. adults express some concern about foreign powers using AI against U.S. interests and about job losses across industries
- 65 percent of U.S. adults express some concern about how AI may impact their personal data privacy
- 46 percent of Black adults express concern about racial discrimination in AI applications
Morning Consult also asked U.S. voters whether they support a ban on Chinese-based social media platforms in the United States in a poll from December 16-19, 2022 of 2,001 registered voters. They found that:
- “53 percent of voters support a ban on Chinese-based social media platforms in the United States, while a slightly higher share (59 percent) support banning the platforms from government-issued devices”
- “The U.S. ban had strong support among baby boomer voters, with 2 in 3 backing it, while the proposal was more divisive among younger generations. Gen Z voters were slightly more likely to oppose (41 percent) than support (32 percent) such a ban, while another 28 percent didn’t know or had no opinion. Millennial voters were roughly split on the ban, with 39 percent supporting it and 34 percent opposing it.”
– – –
We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.