
October 2022 U.S. Tech Policy Roundup

Kennedy Patlan, Alex Kennedy, Rachel Lau / Nov 1, 2022

Kennedy Patlan, Alex Kennedy, and Rachel Lau are associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues.

Elon Musk's acquisition of Twitter dominated tech headlines in October 2022. Břetislav Kovařík/Wikimedia

October saw movement in tech policy across all three branches of the federal government. The White House Office of Science and Technology Policy (OSTP) released its Blueprint for an AI Bill of Rights, and the Supreme Court agreed to hear a pair of cases about content moderation and platform liability. In Congress, Rep. Mary Gay Scanlon (D-PA) introduced the Health and Location Data Protection Act (H.R. 9161), which would enact strict limitations on data broker transactions involving health and location data.

In industry news, Elon Musk bought Twitter for $44 billion after months of back-and-forth and legal challenges. Following the takeover, Musk began firing Twitter executives and suggested he would loosen content moderation standards, though he said he would first establish a “content moderation council.” Relatedly, musician Ye (formerly known as Kanye West) announced that he would acquire Parler, a social media platform popular with conservatives. The deals have sparked new fears of weaker content moderation policies that could enable more hate, misinformation, and violent extremist organizing on the platforms, especially ahead of the midterm elections. Not only do content moderation policies affect the spread of disinformation and misinformation, but online hate and threats of violence disproportionately impact women of color who are running for office, as new research from the Center for Democracy & Technology (CDT) showed in October.

The analysis below is based on techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues. Read on to learn more about October U.S. tech policy highlights from the White House, Congress, and beyond.

White House Launches Blueprint for an AI Bill of Rights

  • Summary: This month, the White House Office of Science and Technology Policy (OSTP) released its much-anticipated Blueprint for an AI Bill of Rights. The landmark document comes a year after former OSTP Director Dr. Alondra Nelson’s Wired op-ed calling for “a bill of rights for an AI-powered world.” Its five core principles are: 1) Safe and Effective Systems, 2) Algorithmic Discrimination Protections, 3) Data Privacy, 4) Notice and Explanation, and 5) Human Alternatives, Consideration, and Fallback. The blueprint also includes a technical companion that “provides examples and concrete steps for communities, industry, governments, and others to take in order to build these protections into policy, practice, or the technological design process.” The Blueprint is not a specific plan of action or an explicit commitment of federal effort, though it does offer a variety of general principles, opportunities, and approaches that collectively could serve as the basis of formal policy. A fact sheet released alongside the blueprint highlights efforts at various federal agencies to protect the rights of Americans in a wide range of contexts, including the workplace, schools, health care, housing, and communities. Among the agency actions are a Department of Education commitment to develop recommendations by early 2023 on the use of AI in schools, forthcoming Department of Housing and Urban Development guidance on how algorithmic tenant screening tools can violate fair housing rules, and a cross-agency effort “to develop new policies and guidance for using and buying AI products and services” to avoid harmful impacts.
  • Stakeholder Response: Numerous civil rights advocates, privacy advocates, elected officials, and technologists welcomed the document as a critical first step in delivering much-needed protections for the American people. Representative Mark Takano (D-CA) described the initiative as “an early step toward establishing consumer protections for individuals subjected to the use of AI in the courtroom, the banking system, and other services throughout our daily lives.” Consumer Reports released a statement calling for “more federal and state agencies implementing these recommendations” and for “Congress to codify these recommendations into law.” The U.S. Chamber of Commerce disagreed, arguing that provisions in the Blueprint could “handcuff America’s ability to compete on the global stage” due to its overly broad definition of “automated systems.”
  • What We’re Reading: Janet Haven, Executive Director of the Data & Society Research Institute and a member of the National Artificial Intelligence Advisory Committee (NAIAC), wrote about how to interpret the White House’s Blueprint for an AI Bill of Rights. VentureBeat published an article on what the Blueprint for an AI Bill of Rights does – and does not – do. Politico interviewed Oren Etzioni, founding CEO of the Allen Institute for AI, who described the Bill of Rights initiative as “principles as a stake in the ground” and a “focal point…Like when Thor hits his hammer to the earth and there’s huge reverberations.” At Lawfare, Alex Engler discussed the Blueprint’s strong progress (relative to Trump Administration guidance on AI) in contextualizing AI harms, as well as the document’s gaps, particularly with respect to the use of AI by federal law enforcement. Finally, Tech Policy Press’s podcast The Sunday Show covered the context, challenges, and impact of the AI Bill of Rights.

Section 230 Heads to the Supreme Court

  • Summary: In early October, the Supreme Court agreed to hear Gonzalez v. Google, a case that may have lasting implications for the internet and content moderation through its expected review of Section 230. Section 230 offers broad protections for companies that host user-generated content, as well as for those companies’ moderation (and non-moderation) choices. The case emerged out of the 2015 ISIS terrorist attacks in Paris, in which Nohemi Gonzalez and 129 others were killed. Gonzalez’s family and estate have accused YouTube (owned by Google) of playing an active role in radicalizing ISIS terrorists through the platform’s algorithmic promotion of content to “users whose characteristics indicated they would be interested in ISIS videos.” Two lower courts ruled against the Gonzalez family, holding that Section 230 protected companies’ algorithmic amplification and targeting of user-generated content. The Supreme Court’s review could decide whether companies can be sued for promoting harmful or illegal content on their platforms, potentially expanding platform liability significantly.
  • Stakeholder Response: NetChoice counsel Chris Marchese defended Section 230 on behalf of tech companies, stating that Gonzalez v. Google “show[s] the importance of content moderation” and warning that an unmoderated internet could allow extreme and obscene content to run rampant. Representatives from the Center for Democracy & Technology and the Stanford Cyber Policy Center shared their perspectives on the case, indicating that the court’s decision could negatively impact free speech and have broader implications beyond the original case scope. On the other side, some legal scholars and technology critics continue to see Section 230 as an “enabler of corporate hypocrisy and irresponsibility,” and Justice Clarence Thomas argued back in 2020 that Section 230 should be reviewed and narrowed in scope. “The solution is simple: if a firm is monetizing content on their platform, they should be held liable for it,” said Matt Stoller in a statement from the American Economic Liberties Project supporting efforts to narrow Section 230.
  • Reform Proposals: Recent years have seen many proposals to repeal or rewrite Section 230. Potential reform efforts include:
    • The Biden White House Principles for Enhancing Competition and Tech Platform Accountability, released last month, call for “remov[ing] special legal protections for large tech platforms” and note that “the President has long called for fundamental reforms to Section 230.”
    • Stop the Censorship Act (H.R. 8612, sponsored by Rep. Paul Gosar, R-AZ), which would eliminate companies’ liability protections for removing content they find objectionable and would instead provide protections only for removing or moderating content that violates the law.
    • 21st Century FREE Speech Act (H.R. 7613, sponsored by Rep. Marjorie Taylor Greene, R-GA, and S. 1384, sponsored by Sen. Bill Hagerty, R-TN), which would repeal Section 230 altogether.
    • EARN IT Act (S. 3538, sponsored by Sen. Lindsey Graham, R-SC), which would remove Section 230 protections for civil and criminal liability related to child sexual abuse material.
    • You can see the full suite of legislative proposals connected to Section 230 at techpolicytracker.org.
  • What We’re Reading: Vox provided an in-depth analysis of the case, the history of Section 230, and the case’s potential implications for tech companies. WIRED featured an opinion piece from Section 230 expert Jeff Kosseff arguing that the scope of Section 230 should be decided in the halls of Congress, not at the Supreme Court. GovTech recapped a Brookings Institution panel where experts discussed lessons from previous Section 230 reform efforts. Tech Policy Press spoke to Anupam Chander, a Professor of Law and Technology at Georgetown University, and experts from the CITRIS Policy Lab, the Knight First Amendment Institute, and the Washington Post about the case’s potential impacts. The Verge also covered a related case that the Supreme Court will rule on this term, Twitter v. Taamneh, which also has potentially significant implications for content moderation law. In that case, the U.S. Court of Appeals for the Ninth Circuit found that social media companies could potentially be liable for “knowingly” aiding and abetting acts of terrorism by providing “generic, widely available services” if they could have taken more aggressive enforcement actions to prevent terrorists from using those services. Twitter appealed the court’s decision, which had declined to consider Section 230. It is unclear whether Elon Musk’s recent acquisition of Twitter will change the company’s strategy in this case.

Biden’s Privacy Shield 2.0 Aims to Fill EU-U.S. Data Privacy Gaps

  • Summary: In early October, President Biden announced the government’s plan to implement U.S. commitments under the European Union-U.S. Data Privacy Framework through an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities. The Executive Order further limits when and for what purposes the U.S. government can conduct signals intelligence activities and creates a multi-layered system for individuals in qualifying areas to seek redress if their personal data was collected illegally. The framework fills a gap in data protections left when a European court struck down the previous Privacy Shield framework in 2020, itself the result of years of litigation and policy development related to United States intelligence collection. Following this executive action, the European Commission can proceed with developing a draft adequacy decision and adoption procedure. Until the framework is officially adopted, businesses that rely on EU-U.S. data transfers remain in legally uncertain territory, and the framework still has a potentially long journey ahead before the European Commission can adopt a final adequacy decision.
  • Stakeholder Response: The Executive Order, also referred to as Privacy Shield 2.0, drew mixed responses. Industry actors seemed to favor the move: TechNet applauded the executive order, and Nick Clegg, Meta’s president of global affairs, tweeted in favor of the new policy. Some data privacy watchdogs, however, critiqued the E.O. as weak and insufficient: BEUC, a European consumer group, and Max Schrems, an EU privacy advocate and plaintiff in previous litigation objecting to inadequate American privacy laws, argued that the new framework, like its predecessor, fails to address commercial uses of personal data. In the United States, the ACLU stated that “although the executive order is a step in the right direction, it does not meet basic legal requirements in the EU, leaving EU-U.S. data transfers in jeopardy going forward.” The International Center for Law and Economics’ Senior Scholar Mikołaj Barczentewicz emphasized the urgency of a speedy final adequacy decision, noting that EU citizens could lose access to key online services if a privacy shield agreement is not reached.
  • What We’re Reading and Listening To: Politico highlighted how various international stakeholders responded to the executive order. At Lawfare, Paul Rosenzweig contextualized the executive order with a larger analysis of the Biden Administration’s strategy on signals intelligence. The International Association of Privacy Professionals provided an in-depth analysis of the executive order and new Department of Justice regulations as well as the powers of the European Data Protection Board.

New Legislation and Policy Updates

  • Health and Location Data Protection Act (sponsored by Rep. Mary Gay Scanlon, D-PA): This month, the Health and Location Data Protection Act (H.R. 9161) was introduced in the House by Rep. Mary Gay Scanlon (D-PA) alongside Reps. Anna Eshoo (D-CA), Pramila Jayapal (D-WA), and five others. The bill, a companion to S. 4408 (sponsored by Sen. Elizabeth Warren, D-MA), would ban data brokers from selling, reselling, licensing, trading, transferring, or sharing health and location data. It would also provide the Federal Trade Commission (FTC) with $1 billion for enforcement. The bill has been referred to the House Committee on Energy and Commerce, with no markup scheduled.

Public Opinion Spotlight

According to the 2022 Edelman Trust Barometer, Americans increasingly view hardware and software companies and social media companies as one entity, and they trust tech companies less. The study, based on over 36,000 online interviews conducted November 1-14, 2021, found that:

  • “Trust in tech overall has declined 24 points during the past decade in the U.S., losing ground across all demographics.
  • Republicans are 16 percent less trusting of tech than Democrats.
  • People with high incomes are more likely to trust the sector than those with low incomes.
  • In developed countries like the U.S., the majority of people trust neither governments to regulate technology nor the large platforms to regulate themselves.
  • The majority of people in developed countries still believe that technology can play a role in solving urgent societal needs, including healthcare access, mitigating climate change and increased economic competitiveness.”

YouGov polled 1,000 U.S. adult citizens on October 12-14, 2022, about their views on content moderation following the suspension of Kanye West from Twitter and Instagram after West posted antisemitic comments. The poll found that:

  • “Nearly three in four Americans (72 percent) believe the companies have a responsibility to prevent harassment, including large majorities of Democrats (76 percent) and Republicans (66 percent).
  • Two-thirds of Americans (67 percent) think users should be prevented from posting hate speech or racist content; on this too, most Democrats (80 percent) and Republicans (58 percent) agree.
  • Most Americans (62 percent) also think social media sites have a responsibility to prevent the spread of conspiracy theories or false information. Eight in 10 Democrats (79 percent) think companies should prevent the spread of conspiracy theories or false information, compared to only 49 percent of Republicans.
  • Majorities of Americans agree that companies should suspend accounts posting content that falls into each of the five categories asked about, including violent content (77 percent say accounts posting it should be suspended), content that promotes racial division (75 percent), antisemitic content (74 percent), hate speech (73 percent), and disinformation (65 percent).
  • One reason why Republicans may be less supportive of online content regulation than Democrats is that they are far more likely to believe that social media companies are biased in how they apply rules related to fact checking and censorship. Only 16 percent of Republicans say social media sites fairly apply rules in these areas, while 69 percent say they are biased. Democrats are more divided: 34 percent say they are fair and 28 percent say they are biased.”

A Public Policy Polling poll commissioned by American Family Voices, an advocacy group, was released in October, surveying 676 swing state voters on September 27-28, 2022. The poll investigated voter opinion on regulating Big Tech and found that:

  • “77 percent of voters agree that Big Tech corporations like Facebook, Google and Amazon have grown too big, too powerful and too invasive and use their monopoly power to price gouge, to collect personal information on ordinary Americans, and make billions invading consumer privacy.
  • 44 percent of voters say the government has a lot of responsibility in regulating big tech corporations, while 35 percent of voters say the government only has some responsibility in regulating these companies.
  • 37 percent of voters say that if their member of Congress or U.S. Senator voted to impose greater regulations on Big Tech corporations like Amazon, Facebook, and Google to reduce their power and prevent monopolies, they would be much more likely to vote for that person; 28 percent say they would be somewhat more likely to vote for that person, and 19 percent say it would make no difference.
  • 70 percent of voters support addressing legal loopholes that give Big Tech corporations unfair advantages that preserve and expand their power.
  • 75 percent of voters support updating anti-monopoly laws to rein in the monopoly power of Big Tech corporations like Amazon, Facebook and Google."

- - -

We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.

Authors

Kennedy Patlan
Kennedy Patlan is a Project Manager at Freedman Consulting, LLC, where she assists with strategic development, project management, and research. Her work covers technology policy, health advocacy, and public-private partnerships.
Alex Kennedy
Alex Kennedy is a Senior Associate at Freedman Consulting, LLC. She supports project teams through strategic planning, research, and project management, and her work focuses on technology and civil rights, with a particular emphasis on emerging technology.
Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity.
