
January 2023 U.S. Tech Policy Roundup

Kennedy Patlan, Rachel Lau, Carly Cramer / Feb 1, 2023

Kennedy Patlan, Rachel Lau, and Carly Cramer are associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues.

Jan. 10, 2023: Rep. Jim Jordan, R-OH, participates in House debate on H.Res.12: "Resolution Establishing a Select Subcommittee on the Weaponization of the Federal Government, as a Select Investigative Subcommittee of the Committee on the Judiciary".

It is only the beginning of the new year, but January 2023 kicked off with plenty of tech policy action across all levels of the U.S. government. President Joe Biden published an op-ed encouraging Congress to take bipartisan action to “hold Big Tech accountable,” highlighting privacy protections, Section 230 reform, and increased competition. The White House’s Office of Science and Technology Policy released a roadmap for researchers outlining the federal government’s research priorities related to information integrity. On January 4, President Biden re-issued a nomination for Gigi Sohn to fill a vacant seat on the Federal Communications Commission. Sohn was first nominated for the role in October 2021, and she is likely to face another round of confirmation hearings due to Republican resistance.

At the state level, privacy laws have taken effect in Virginia and California, with laws in Colorado, Connecticut, and Utah not far behind. Politico Pro reported that a growing number of states (including New York, Oregon, Washington, Kentucky, Massachusetts, Mississippi, Tennessee, Oklahoma, and Indiana) introduced data privacy bills this month in the absence of federal lawmaking. The Washington Post also reported on states’ deliberations over additional data legislation. More states have also begun banning the use of TikTok on government devices, leaving the company on the defensive. There is reason to believe states will be where the most significant action on tech policy takes place over the next two years.

In corporate news, Elon Musk lifted Twitter’s ban on political ads, while Meta said it would reinstate former President Trump’s accounts on Facebook and Instagram. WhatsApp, another Meta subsidiary, was hit this month with nearly $6 million in fines from Ireland’s Data Protection Commission for violations of the European Union’s General Data Protection Regulation (GDPR). Meta is also facing EU fines for GDPR violations related to its terms of service, which required users to accept personalized advertising as a condition of using its platforms. Meanwhile, the Department of Justice, joined by eight states, filed an antitrust lawsuit accusing Google of abusing monopoly power in its advertising business. Also this month, the Supreme Court allowed Meta to proceed with a lawsuit against NSO Group Technologies, an Israeli spyware company that Meta accuses of unlawfully accessing WhatsApp servers to surveil approximately 1,400 users. OpenAI CEO Sam Altman, whose company created ChatGPT, visited D.C. lawmakers this month, while the House Energy and Commerce Committee announced that TikTok CEO Shou Zi Chew is due to testify in March.

Finally, this month marked the start of the 118th Congress, which kicked off with what appears to be a renewed focus on technology policy. The below analysis is based on techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues.

Read on to learn more about January U.S. tech policy highlights from the White House, Congress, and beyond.

118th Congress Kicks Off with Tech Policy Oversight at the Forefront

  • Summary: Once the 118th Congress got underway with Rep. Kevin McCarthy (R-CA) elected Speaker of the House, the new Speaker was quick to make his position on Big Tech known. Speaker McCarthy, alongside House Republicans, approved plans to launch the Select Subcommittee on the Weaponization of the Federal Government, which will investigate whether the federal government and big tech companies are actively working to censor, harass, or minimize conservative voices. The subcommittee, housed under the House Judiciary Committee, will be led by Rep. Jim Jordan (R-OH) and will include five Democratic members. In his op-ed for the Wall Street Journal, President Biden noted that the country has “heard a lot of talk about creating committees” but encouraged both sides of the congressional aisle to work together to keep Big Tech in check. The President outlined four broad principles for congressional action: establishing federal protections for Americans’ privacy, reforming Section 230, requiring algorithmic transparency from Big Tech, and increasing competition in the tech industry.
  • Stakeholder Response: Judiciary Committee Chair Rep. Jim Jordan (R-OH) said in a floor speech that the panel wants the “double standard to stop” and emphasized that the committee’s primary goal is to protect the First Amendment. Meanwhile, House Democrats are still weighing their next steps regarding committee representation, with House Minority Leader Hakeem Jeffries (D-NY) saying, “We’re still evaluating the dynamics as it relates to the select committee on insurrection protection.” Rep. Jerry Nadler (D-NY) had stronger words, stating that the subcommittee was “designed to inject extremist politics into our justice system and shield the MAGA movement from the legal consequences of their actions.” Advocacy groups also weighed in: Evan Greer, Executive Director of Fight for the Future, said of the committee, “It’s very much about the grievance politics of complaining about tech companies’ restriction over speech while doing absolutely nothing to reduce their power.”
  • What We’re Reading: The Verge reports on what Rep. Kevin McCarthy’s speakership might mean for Big Tech. The Washington Post covered several stories, including what to expect from the 118th Congress, what House Speaker McCarthy may mean for Silicon Valley, and states to watch in 2023 that may lead the way on tech policy legislation. An op-ed in The Hill examined the potential for 2023 to become the year tech privacy legislation gets passed.

A New Risk Management Framework for AI

  • Summary: On January 26, the National Institute of Standards and Technology (NIST) released its new AI Risk Management Framework (AI RMF), the culmination of 18 months of collaboration with private and public sector stakeholders. The AI RMF, issued alongside a Playbook, Explainer Video, and Roadmap, aims to promote the trustworthiness and security of AI systems. It identifies four core functions for managing risk: govern, map, measure, and manage. The first of these, govern, is the heart of the recommendations, urging stakeholders to create a culture of risk management through sound processes, structures, and policies. NIST has been developing the framework since Congress mandated it in the National AI Initiative Act of 2020, and it previously released two drafts of the document for public review.
  • Stakeholder Response: The framework was met with general support from leaders in business and civil society, although some argued that it does not go far enough. Alexandra Reeve Givens, President and CEO of the Center for Democracy & Technology, spoke at NIST's launch event, arguing that while "the framework is a good start… we're going to have to get a lot more specific to help people actually see themselves in these guidance documents and know the rules of the road." Alondra Nelson, Deputy Director for Science and Society at the White House Office of Science and Technology Policy (OSTP), also spoke at the event, highlighting the extensive cooperation between OSTP and NIST in the creation of the framework. She noted that the collaboration had enabled greater cohesion between the new framework and the Blueprint for an AI Bill of Rights that the White House released last year. Rep. Frank Lucas (R-OK), Chair of the House Committee on Science, Space, and Technology, celebrated the framework for advancing AI with an emphasis on transparency, privacy, and reliability.
  • What We’re Reading: Rep. Ted Lieu (D-CA) wrote in The New York Times calling for a dedicated federal agency to regulate AI and celebrating the upcoming AI Risk Management Framework. Jessica Newman, Director of the UC Berkeley Center for Long-Term Cybersecurity's AI Security Initiative and Co-Director of the UC Berkeley AI Policy Hub, wrote "A Taxonomy of Trustworthiness for Artificial Intelligence" as a complement to the NIST release. In Tech Policy Press, Newman summarized five key takeaways from the newly released framework.

Section 230 Briefs Ramp Up Before February SCOTUS Arguments

  • Summary: As the Supreme Court’s hearings in Gonzalez v. Google and Twitter v. Taamneh approach next month (arguments are scheduled for February 21 and 22), an onslaught of amicus briefs has flooded the court from a wide range of stakeholders. Gonzalez v. Google will produce the Supreme Court’s first ruling on Section 230, which protects platforms from lawsuits related to third-party content. In Gonzalez, the plaintiffs argue that because YouTube’s recommendation algorithms helped the Islamic State spread its videos to users, the company should be liable for the death of an American killed in an Islamic State terrorist attack in Paris. Twitter v. Taamneh presents a similar question: the family of a victim of a 2017 ISIS attack sued Twitter over ISIS’s use of the platform, and the case could reshape social media platforms’ obligations under the Anti-Terrorism Act. The stakes in both cases are high, as they have the potential to reshape content moderation and the role of platforms on the internet. Additionally, the Supreme Court asked the Department of Justice and the White House whether they believe it should review social media laws in Florida and Texas; the trade group Chamber of Progress weighed in, calling on the court to take up both cases.
  • Stakeholder Response: Almost all amicus briefs filed by high-profile tech companies, advocacy groups, and think tanks have been in support of the respondent, Google, in Gonzalez v. Google, though their arguments vary in how they would have the Court interpret Section 230. Advocacy groups and think tanks weighing in include TechFreedom, the Center for Democracy & Technology, the Bipartisan Policy Center, the ACLU, the Electronic Privacy Information Center, Free Press Action, the Chamber of Progress, the Electronic Frontier Foundation, the Progressive Policy Institute, NYU Stern’s Center for Business and Human Rights, and others. A group of national security experts submitted a brief in support of affirmance, arguing that online platforms’ content moderation is crucial to addressing online threats and incentivizing platforms to remove dangerous content, while another group of former national security officials, led by Georgetown Law’s Mary McCord, filed a brief in favor of neither party, arguing that the Court of Appeals was wrong to invoke Section 230(c)(1) to deny the petitioners their day in court. Sen. Ron Wyden (D-OR) and former Rep. Christopher Cox (R-CA), the co-authors of Section 230, also submitted a brief in support of Google, asserting that targeted algorithmic recommendations are no different from the other methods of curating and publishing content that Section 230 aims to protect. Finally, Meta, Twitter, Yelp, and other tech companies have rallied behind Google, pressing the Supreme Court to maintain the liability shield and prevent future litigation over recommendation methods, with or without the use of algorithms or machine learning.
  • What We’re Reading: The Wall Street Journal provides an overview of Google’s brief and the potential consequences of the case. Quartz wrote about Section 230 in the context of Seattle Public Schools’ lawsuit against Meta, ByteDance, Alphabet, and Snap over their impact on student mental health. Hiromitsu Higashi at Tech Policy Press discussed the role of “right, capacity, and will” in the Section 230 and content moderation debate. Ars Technica reports on the Supreme Court’s solicitation of the White House’s views on social media laws. The Washington Post writes about TikTok, another platform under scrutiny that will be affected by the Section 230 debate. Finally, NPR reports on the potential for social media platforms to be held accountable for online drug dealers in the event of a change to Section 230.

New Legislation and Policy Updates

  • See Something, Say Something Online Act (sponsored by Senators Joe Manchin (D-WV) and John Cornyn (R-TX)): The two senators announced the reintroduction of this bill, formerly introduced as S.27 in the 117th Congress (a new bill number has not yet been released). The bill would amend Section 230 to require tech companies to report illegal activity taking place on their platforms.
  • No TikTok on United States Devices Act (sponsored by Congressman Ken Buck (R-CO) and Senator Josh Hawley (R-MO)): This bill would impose a nationwide ban on TikTok, including prohibiting transactions between U.S. companies and TikTok parent company ByteDance. It would also require the Director of National Intelligence to submit a report to Congress outlining national security threats posed by the use of TikTok. The bill builds on the No TikTok on Government Devices Act (S.1143), which passed in the last Congress and prohibits TikTok on devices used by federal agencies. The bill may go to a vote in the House Foreign Affairs Committee next month.
  • Protecting American Intellectual Property Act (S.1294, sponsored by Sen. Chris Van Hollen (D-MD)): This bill from the last Congress, which was signed into law on January 5, 2023, “imposes sanctions on foreign individuals and entities involved in the theft of trade secrets belonging to a US individual or entity.”
  • Protecting Speech from Government Interference Act (H.R. 140, sponsored by Rep. James Comer (R-KY)): This bill would prohibit presidential administration officials from promoting the censorship of speech and prevent government officials from pressuring social media companies to enact censorship policies.
  • Bills Introduced in the Final Days of the 117th Congress: The following three bills were introduced by a bipartisan coalition of lawmakers in the final days of the 117th Congress and will not advance unless they are reintroduced in the 118th Congress.
  • Platform Integrity Act (H.R. 9695, sponsored by Rep. David Cicilline (D-RI)): This bill would have amended Section 230 of the Communications Decency Act so that its protections for tech platforms against lawsuits would not apply to any content that is “affirmatively promoted or suggested to their users.”
  • Stopping Unlawful Negative Machine Impacts Through National Evaluation Act (S.5351, sponsored by now-retired Sen. Rob Portman (R-OH)): This bill would have clarified that existing civil rights laws apply to decisions made by AI systems.
  • Platform Accountability and Transparency Act (S. 5339, sponsored by Sen. Christopher Coons (D-DE)): This bill would have increased transparency around social media companies by granting independent researchers and the public access to previously undisclosed data sets from those companies.

Public Opinion Spotlight

Morning Consult has been tracking consumer opinion monthly since August 2022 to understand attitudes toward regulating Big Tech. The most recent monthly survey of 627 adults found that:

  • 45 percent of Democrats, 39 percent of independents, and 34 percent of Republicans support stronger government regulation of tech companies

In addition, a Morning Consult poll of 2,202 U.S. adults, conducted December 27, 2022 to January 1, 2023, found that:

  • 64 percent agree that major technology companies have too much power, with only 17 percent disagreeing

The survey also asked respondents to share their opinions on the effects of breaking up major tech companies. It found that:

  • 48 percent think it would make the market more competitive
  • 46 percent think it would be better for small businesses
  • 41 percent believe it would increase innovation
  • 36 percent believe that customer and user data privacy would improve
  • 34 percent believe that safety of children online would improve
  • 28 percent of people think that there would be more false information on digital platforms, whereas 25 percent think there would be less false information

In a survey published this month, Morning Consult and the Chamber of Progress polled 2,006 midterm voters between November 17 and 18, 2022, regarding voter priorities. They found that:

  • Only 1 percent of midterm voters said that regulating technology companies was the most important issue in deciding whom to vote for in the midterm elections
  • When asked about technology-related priorities, 35 percent of respondents believed protecting consumers from scams/malware is a top issue that they want the Biden Administration to focus on in the next two years
  • 27 percent of respondents believe another top issue for the Administration to focus on is enacting regulations to protect consumer privacy online
  • 22 percent of respondents believe enacting rules to prevent people's online data from being used to discriminate against them should be a priority
  • 40 percent of respondents think the Biden Administration should enact technology regulations to protect consumers, while making sure that technology apps and services don't lose useful functions
  • 23 percent of respondents think the Biden Administration should focus on bringing more tech jobs and opportunity to people and communities instead of additional technology regulation
  • 67 percent of respondents would like to see Congress prioritize regulations that would protect consumers from spam, malware, ransomware, unwanted surveillance, and other technology abuses
  • 66 percent of respondents think social media services should remove more violent, illegal, harassing, and harmful content
  • 72 percent of respondents think social media companies should be allowed to remove individuals from their platforms for promoting violent, harassing, and harmful content

- - -

We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.

Authors

Kennedy Patlan
Kennedy Patlan is a Project Manager at Freedman Consulting, LLC, where she assists with strategic development, project management, and research. Her work covers technology policy, health advocacy, and public-private partnerships.
Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
Carly Cramer
Carly Cramer is an Associate at Freedman Consulting, LLC, where she assists project teams with communications, policy research, and coalition support. Her work covers public health, artificial intelligence policy, and public-private partnerships.
