July 2023 U.S. Tech Policy Roundup
Kennedy Patlan, Rachel Lau, J.J. Tolentino / Aug 1, 2023

Rachel Lau, Kennedy Patlan, and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Alison Talty, a Freedman Consulting Phillip Bevington policy & research intern, also contributed to this article.
In July, all eyes were on Congress as lawmakers in both chambers attempted to advance the annual National Defense Authorization Act before the August recess. Policymakers aimed to ensure that proposals on artificial intelligence, social media oversight, and other tech policy issues were included in the package.
This month, the Department of Commerce added spyware vendors Cytrox and Intellexa to a trade blacklist due to national security risks. The move followed President Joe Biden’s executive order on commercial spyware restrictions this spring. The Biden administration also received good news from the 5th Circuit Court of Appeals, which granted a temporary stay of an injunction that restricted the administration from communicating with social media companies about “protected free speech.” In seeking the stay, the Justice Department expressed concern that the ban on contacting platforms could threaten national security. The July 4 order had already begun impacting the administration’s relationships with the platforms, including its efforts to thwart election interference, voter suppression, and disinformation.
At the agency level, Federal Communications Commission nominees Anna Gomez (D), Geoffrey Starks (D), and Brendan Carr (R) were approved by the Senate Commerce Committee. In July, the Senate Commerce Committee also received a briefing on AI from Federal Trade Commission (FTC) officials, laying the groundwork for potential AI policy development to follow. Also in July, the FTC lost an appeal in its case against Microsoft’s acquisition of Activision Blizzard. The deal continued to move forward under a merger agreement that the companies extended through October, and the FTC chose to pause its in-house trial related to the merger.
In one of July’s more memorable moments, Elon Musk decided to change Twitter’s name to “X.” Meanwhile, Meta released its newest app and X competitor, Threads, which signed up over 100 million users within days, even though its active user count soon declined by half. Also in July, the House Judiciary Committee made Threads part of its ongoing investigations into tech platform policies and related communications with the Biden administration. The committee called off a potential contempt vote against Meta CEO Mark Zuckerberg at the last minute, after the company turned over additional documents. Meta also came under scrutiny in July as tech policy advocates and researchers urged the FTC to investigate whether the company has been rejecting sexual-health advertisements for women.
In the field of AI, Google DeepMind co-founder Mustafa Suleyman welcomed more regulation of AI and highlighted the technology’s potential dangers. Hollywood actors raised concerns about AI use in film and television as they launched a strike, and a panel of officials and creators representing the art and entertainment industries participated in a Senate hearing on AI and copyright. The Securities and Exchange Commission (SEC) proposed rules requiring “broker-dealers to address conflicts of interest in the use of artificial intelligence in trading.”
The analysis below is based on techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues.
Read on to learn more about July’s U.S. tech policy highlights, including lawmakers’ ongoing AI discussions, Lina Khan’s congressional testimony, and emerging legislation on the government’s use of and access to data.
Companies Make AI Safety Commitments; Biden Administration and Majority Leader Schumer Convene AI Discussions
- Summary: The Biden administration secured voluntary commitments from leading AI companies to improve the safety, security, and transparency of AI technology development. The seven industry leaders (Microsoft, OpenAI, Google, Meta, Amazon, Anthropic, and Inflection AI) agreed to allow security experts to test their products before release and committed to sharing information about the safety of their systems with external stakeholders. The companies also pledged to use mechanisms to make it clear when content is a product of generative AI. President Biden noted that the administration is developing an AI executive order and is working towards bipartisan legislation.
- Public Interest Leaders Discuss AI Risks with VP Harris: Vice President Kamala Harris met with consumer protection, labor, and civil rights leaders to discuss AI-related risks and to reaffirm the Biden administration’s commitment to protecting Americans from harm and discrimination. At the meeting, civil society leaders shared real-world examples of ongoing risks posed to vulnerable populations and emphasized the need to ensure AI policy is grounded in the rights of workers, consumers, and impacted communities. Vice President Harris rejected the “false choice” that suggests the U.S. cannot both advance innovation and protect consumers, arguing that we “should not dampen” AI innovation that can improve people’s lives, but should take steps to mitigate “current and potential risks such as threats to safety, privacy, and those that deepen existing inequality and discrimination.”
- Sen. Schumer Prioritizes AI Policy: Senate Majority Leader Chuck Schumer (D-NY) sent a letter to his colleagues laying out plans to build on the SAFE Innovation Framework for AI. The letter also noted that on July 11, senators received a first-ever classified briefing from the Department of Defense and the intelligence community on AI’s use in national security. Sen. Schumer also announced that the Senate will host top AI experts in a nine-part series of “AI Insight Forums” in the coming months, covering copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, transparency, and elections and democracy. Sen. Schumer acknowledged that Congress is still far from bipartisan consensus on AI regulation, but wants his colleagues to get up to speed on the complexities of the new technology.
- Stakeholder Response: Maya Wiley, CEO of the Leadership Conference on Civil and Human Rights, and Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, praised Vice President Harris for prioritizing AI risks and threats and voiced a desire to see “meaningful regulation [and] implementation of some of the really important work this administration has done.” Other civil society groups present at the meeting included Upturn, AARP, Encode Justice, the AFL-CIO, and the Cyber Civil Rights Initiative, among others. Senators’ reactions to the classified AI briefing varied: Sen. Chris Coons (D-DE) noted that the briefing generated a greater sense of urgency to address AI-related national security concerns, while Sen. Marco Rubio (R-FL) noted that regulation will be difficult. In a Wired article, Suresh Venkatasubramanian, director of the Center for Tech Responsibility at Brown University and an author of the White House’s Blueprint for an AI Bill of Rights, stated that the Biden administration already has the framework needed to develop an effective executive order to make AI safer. Leading AI companies Anthropic, Google, Microsoft, and OpenAI announced the Frontier Model Forum, a new partnership drawing on the technical and operational expertise of its members to promote safe and responsible development of cutting-edge AI models.
- What We’re Reading: Cyberscoop examined the FTC’s ability to utilize “algorithmic disgorgement” to require companies to delete algorithmic models built on improperly obtained data. The New York Times reported that Google is testing Genesis, an AI technology that can take in information and generate news content. Dr. Safiya Noble, a Professor of Gender Studies and African American Studies at UCLA and Co-Director of the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry, joined NPR’s All Things Considered to discuss how AI can be used to perpetuate racist systems. At Tech Policy Press, Hanlin Li examined the tension between data scraping practices used to train AI technologies and a lack of data stewardship for users and content creators. Dr. Fallon S. Wilson founded the #BlackTechFutures Research Institute, housed at Stillman College, to support marginalized communities struggling to “navigate an AI-enabled world.” The Federation of American Scientists launched an NDAA AI Tracker to monitor AI-related provisions included in the National Defense Authorization Act.
Khan Defends the FTC at House Judiciary Hearing; FTC Releases Updated Merger Guidelines
- Summary: On July 13, FTC Chair Lina Khan testified before the House Judiciary Committee, defending the agency’s work to promote competition and protect consumers. Khan highlighted pro-competition enforcement actions such as the FTC’s antitrust suit against Microsoft’s purchase of Activision Blizzard. Protecting consumer privacy was also a key topic of her testimony, as she described actions being taken to protect against consumer deception and to secure data privacy. The hearing came after months of pushback from Republicans, inside and outside of Congress, including the resignation of the FTC’s two Republican commissioners. This month, however, President Biden nominated two new Republican commissioners, Andrew Ferguson and Melissa Holyoak, to replace those who resigned. The FTC’s investigation into Twitter’s privacy practices, its proposed ban on noncompete agreements, and its attacks on junk fees have also drawn criticism from many prominent Congressional Republicans.
- Despite the backlash Khan and the FTC faced during the hearing, the agency continued to push forward with antitrust actions this month. On July 18, the FTC and the Department of Justice released a draft of new merger guidelines. The guidelines highlight controversial, potentially anti-competitive practices, including “killer acquisitions,” in which a company acquires a rival to prevent future competition. The FTC noted that such acquisitions can also harm competition by giving one company exclusive access to a large amount of data, further strengthening its market position.
- Stakeholder Response: House Republicans led the charge against Khan’s FTC leadership during the hearing, with House Judiciary Committee Chairman Jim Jordan (R-OH) asserting that, under Khan, the FTC has exhibited an unchecked, radical amount of power over American business. Other House Republicans accused Khan of harassing businesses, taking politically motivated actions, poorly managing the FTC, and misleading Congress. Khan defended the agency’s antitrust agenda and many House Democrats voiced support for her. During the hearing, Rep. Jerry Nadler (D-NY) stated his support for the FTC’s decision to investigate Twitter, and Rep. Hank Johnson (D-GA) said that many of the attacks on Khan were based on her ethnicity. Some Republican members also voiced support for Khan, with Rep. Ken Buck (R-CO) pointing out the potential hypocrisy of Congress’s accusations of conflicts of interest. The White House also affirmed its support for Khan, stating that she had delivered results for people all across the country.
- What We’re Reading: Before the hearing, Politico examined likely Republican criticisms of Khan and noted many of her supporters’ comments. In The Hill, Taylor Giorno recapped the hearing and the comments of key House Republicans and Democrats; Cecilia Kang did the same in The New York Times. In Politico, Rebecca Kern and Josh Sisco examined how “partisan bickering” and “political theatrics” took over the hearing, overshadowing much of Khan’s testimony. As the appropriations process continues, riders relating to antitrust have drawn concern from consumer advocacy groups, as discussed in The Washington Post. On the industry side, Khan and the FTC faced legal action from Twitter, which asked a federal court to release it from its 2022 FTC privacy settlement. Finally, the FTC recently opened an investigation into OpenAI’s use of personal data and other matters, sparking criticism from OpenAI co-founder Sam Altman.
Congress Advances Limits on Government Commercial Data Purchases
- Summary: In an effort to limit government surveillance and data collection, the House Judiciary Committee unanimously advanced the Fourth Amendment Is Not For Sale Act (FANFSA, H.R. 4639) this month. FANFSA, introduced by Rep. Warren Davidson (R-OH), would prohibit government agencies from buying “subscriber or customer records” without first obtaining a warrant. The bill specifically names location data as covered and was co-sponsored by four Democrats and three Republicans. Sen. Ron Wyden (D-OR), who introduced FANFSA in the last Congress, expressed his support for the bill but also pressed for comprehensive surveillance reform. Sens. Wyden and Rand Paul (R-KY) re-introduced the Senate companion bill at the end of the month, with four Democrats and one Republican also co-sponsoring.
- The Section 702 reauthorization deadline continues to loom. FBI Director Christopher Wray testified before the House Judiciary Committee this month, arguing that while the FBI collects significant communications data on Americans and foreigners under Section 702, the bureau accesses only about three percent of it. House Republicans pushed back against that characterization, highlighting that hundreds of thousands of searches of Americans’ data still occur. Additionally, a report by the President’s Intelligence Advisory Board urgently called for Section 702’s reauthorization, arguing that the authorities are crucial to national security while recommending that bureau officials take steps to improve how the 702 database is used.
- Stakeholder Response: FANFSA’s passage out of committee has prompted support from civil society groups. Sean Vitka, Senior Policy Counsel at Demand Progress, applauded the movement, calling it a “major step forward for privacy in the digital age.” Nora Benavidez, Senior Counsel and Director of Digital Justice & Civil Rights at Free Press, tweeted in support of FANFSA. The Center for Democracy & Technology also endorsed the bill.
- Elected officials and civil society organizations have continued to speak out on Section 702. Rep. Jason Crow (D-CO), a member of the House Intelligence Committee’s Section 702 working group, stated earlier in the month that “there is not going to be a clean reauthorization,” explaining that reforms like warrant requirements and limits on when databases could be accessed would be needed in any reauthorization effort. Additionally, the ACLU and the Electronic Frontier Foundation continued to push for major reforms to, or the abolition of, Section 702.
- What We’re Reading: Wired examined FANFSA’s political support as well as the constitutional precedent behind the bill. CNN reported on the House Republicans’ broader criticisms voiced against the FBI during Wray’s testimony. NPR covered the testimony as well, reporting on Democrats’ and Republicans’ shared concerns over the FBI’s investigations.
Other New Legislation and Policy Updates
The following bills made progress in Congress in July:
- Kids Online Safety Act (KOSA, S. 1409, sponsored by Sen. Richard Blumenthal (D-CT) and Sen. Marsha Blackburn (R-TN)): KOSA, re-introduced in this Congress in May, was approved by the Senate Commerce Committee by voice vote this month. KOSA bans kids under 13 years old from social media, requires companies to obtain parental consent for users under 17 years old to use their platforms, and establishes a range of requirements for platforms to protect kids online through a “duty of care.” The committee approved amendments to the bill’s language on user age verification by platforms and added “filter bubble transparency requirements” mandating greater algorithmic transparency from platforms.
- KOSA has continued to divide civil society organizations and industry groups, with some organizations like Fight for the Future, Center for Democracy & Technology, Electronic Frontier Foundation, and the Computer & Communications Industry Association in opposition to the bill and the American Psychological Association and Common Sense Media in support.
- Children and Teens’ Online Privacy Protection Act (COPPA 2.0, S. 1418, sponsored by Sens. Ed Markey (D-MA) and Bill Cassidy (R-LA)): COPPA 2.0 was approved by the Senate Commerce Committee by voice vote this month. The bill would raise the age of protection under the Children’s Online Privacy Protection Act from 13 to 16 years old, establishing more protection for users between 13 and 16. The committee amendment process removed a provision creating a Youth Marketing and Privacy Division at the Federal Trade Commission.
- AI Accountability Act (H.R. 3369, sponsored by Reps. Josh Harder (D-CA) and Robin Kelly (D-IL)): The House Energy and Commerce Committee unanimously approved the amended AI Accountability Act. The amended bill would direct the National Telecommunications and Information Administration (NTIA) to conduct a study on accountability measures for AI systems used in communications networks and to define “trustworthiness” in AI-related contexts. The bill also directs the NTIA Assistant Secretary to hold public meetings with relevant stakeholders for feedback on how information should be made available to consumers who interact with AI systems.
- Privacy Enhancing Technology Research Act (H.R. 4755, sponsored by Reps. Haley Stevens (D-MI) and Thomas Kean (R-NJ)): This bill calls for governmental coordination on data privacy practices and directs the National Science Foundation to support research on privacy-enhancing technology, including anonymization technologies, protective algorithms, and data minimization in data collection, along with standards establishment and workforce development in partnership with the National Institute of Standards and Technology. The bill was introduced this month, and the House Science, Space, and Technology Committee unanimously reported it to the House.
- Informing Consumers about Smart Devices Act (S. 90, sponsored by Sens. Ted Cruz (R-TX) and Maria Cantwell (D-WA)): This bill would require manufacturers to disclose when internet-connected devices contain cameras or microphones. In February, the House passed a companion bill (H.R. 538) and this month, the Senate passed S. 90 as part of the National Defense Authorization Act.
The following AI-related bills were introduced in July:
- Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023 (CREATE AI Act, H.R. 5077, sponsored by Reps. Anna G. Eshoo (D-CA), Ken Buck (R-CO), and Ted Lieu (D-CA)): The CREATE AI Act would establish the National Artificial Intelligence Research Resource (NAIRR) as a shared national research infrastructure giving AI researchers and students access to the resources, data, and tools needed to develop safe and trustworthy AI. NAIRR aims to democratize AI research, ensuring that everyone, not just large tech corporations, can access computational resources, data, educational tools, and AI testbeds and contribute to AI research and development. A companion bill (S. 2714) was introduced in the Senate by Sens. Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD).
- Artificial Intelligence and Biosecurity Risk Assessment Act (H.R.4704, sponsored by Reps. Anna Eshoo (D-CA) and Dan Crenshaw (R-TX)): The Artificial Intelligence and Biosecurity Risk Assessment Act would amend the Public Health Service Act to require the Assistant Secretary for Preparedness and Response to assess whether AI advancements could pose a public health risk through the development of novel pathogens, viruses, and biological or chemical weapons.
- AI Training Expansion Act of 2023 (H.R. 4503, sponsored by Reps. Nancy Mace (R-SC) and Gerald Connolly (D-VA)): The AI Training Expansion Act of 2023 would amend the Artificial Intelligence Training for the Acquisition Workforce Act to expand AI training requirements in the federal executive branch, requiring acquisition employees, management officials, supervisors, and data and technology employees to undergo AI training. The training would cover basics on AI definitions and use, introductory technological concepts, AI’s potential benefits and risks, data’s role in AI models, risk mitigation strategies, and considerations for executive agencies to take into account when using AI.
- AI LEAD Act (S. 2293, sponsored by Sen. Gary Peters (D-MI)): This bill would direct the Director of the Office of Management and Budget to establish a Chief Artificial Intelligence Officers Council, which the Director would chair. Each agency would designate a Chief Artificial Intelligence Officer and would also form an Artificial Intelligence Governance Board to advise on AI issues.
- Algorithmic Justice and Online Platform Transparency Act (H.R. 4624 / S.__, sponsored by Rep. Doris Matsui (D-CA) and Sen. Edward Markey (D-MA)): This bill requires platforms to disclose the types of personal information their algorithms use and how that information is used. It also requires platforms to disclose their content moderation practices and would form an interagency task force to investigate discriminatory algorithmic processes on online platforms.
The following other technology policy bills were also introduced this month:
- Digital Consumer Protection Commission Act of 2023 (sponsored by Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC)): This bill would create a new federal commission “to regulate digital platforms, including with respect to competition, transparency, privacy, and national security.” The commission would address issues related to AI, data privacy, dominant market positions, and the social harms of technology, like child abuse and cyberbullying. Sens. Warren and Graham published an op-ed in The New York Times on the bill, pushing to “rein in Big Tech.”
- Terms-of-service Labeling, Design, and Readability (TLDR) Act (H.R. 4568, sponsored by Rep. Lori Trahan (D-MA)): The TLDR Act would require websites and apps to provide users an easily understandable summary of their terms-of-service agreements, in an attempt to increase transparency around the use of sensitive personal data. Entities would also need to produce and share with their users a “graphic data flow diagram” of their data use. The synopses must include summaries of the data processed, the purpose and storage of the data, and directions on how users can delete and stop sharing data, among other information. Anna Lenhart, a former aide to Rep. Trahan, wrote about the merits of the TLDR Act in Tech Policy Press.
- Purchased Data Inventory Act (S. 2292, sponsored by Sen. Gary Peters (D-MI)): The Purchased Data Inventory Act would require the Chief Data Officer of each agency to compile and submit a report on the agency’s “covered data purchases.” “Covered data” includes any data or information that could be used to identify an individual. For each purchase, the report must include a description of its purpose and contents, a justification for the purchase, the identity of the vendor, the number of individuals who could be identified from the data, and the type of data included.
- Online Consumer Protection Act (H.R. 4887, sponsored by Reps. Jan Schakowsky (D-IL) and Kathy Castor (D-FL)): The Online Consumer Protection Act would require social media platforms and online marketplaces to provide clearly written terms of service to their users. These entities would also be required to create a Consumer Protection Program to ensure they follow consumer protection laws and implement consumer safety standards. The act would authorize the FTC to implement these regulations and enforce them with civil penalties.
- Communications, Video, and Technology Accessibility Act of 2023 (S. 2494, sponsored by Sen. Ed Markey (D-MA)): This bill would strengthen the accessibility regulations put forth in the 21st Century Communications and Video Accessibility Act of 2010. Standards for closed captioning and audio description on television and online video platforms would be improved, and entities would have to ensure that users can easily adjust closed-captioning settings on their devices. Video conferencing platforms would also have to update and improve their accessibility features, and 911 services would be required to provide equitable access for people with disabilities. The bill grants the FCC authority to continue updating accessibility regulations as new technologies emerge.
- Deceptive Experiences To Online Users Reduction (DETOUR) Act (S. 2708, sponsored by Sens. Mark Warner (D-VA), Deb Fischer (R-NE), Amy Klobuchar (D-MN), and John Thune (R-SD)): This bill would ban entities that run large online platforms from using “dark patterns,” interface designs that can manipulate users into giving up their personal information. Specifically, these entities would be prohibited from designing interfaces in ways that impair users’ decision-making or their ability to consent to sharing personal data. The bill also disallows subdividing users to conduct behavioral experiments without first obtaining consent. Finally, a provision to protect child users prohibits platform designs meant to “create compulsive usage among children and teens under the age of 17.”
- Free Speech Protection Act (H.R. 4791 / S. 2425, sponsored by Rep. Jim Jordan (R-OH) and Sen. Rand Paul (R-KY)): The Free Speech Protection Act would ban federal employees from directing online platform providers to censor speech protected under the First Amendment. The bill would also require federal agencies to report their communications with providers and would prohibit federal funding for grants related to misinformation or disinformation. Finally, the Disinformation Governance Board would be terminated by statute.
Public Opinion Spotlight
From July 6-7, 2023, Ipsos surveyed 1,004 U.S. adults regarding their opinions on Threads, Meta's new social media platform. Their findings include:
- 34 percent of all respondents self-reported as having already tried Threads or being “very” or “somewhat” likely to try the platform within a few weeks. Of respondents who already have a Twitter account, 58 percent say they have already tried Threads or are “very” or “somewhat” likely to try the platform soon; 51 percent of respondents who already have an Instagram account say the same.
- 46 percent of respondents who already have a Twitter account said they will likely use, or already have started using, Threads for the activities they used to use Twitter for.
- Some of this shift may have to do with public opinion of Twitter: 40 percent of respondents say Twitter is “dominated by extreme and unpleasant people,” compared with 8 percent for Instagram, 25 percent for TikTok, and 26 percent for Facebook.
Morning Consult conducted a survey from July 10-12, 2023 of 948 Twitter users, 278 of whom also use Threads, asking about Twitter and its new rival platform. They found that:
- From April 2023 to June 2023, Twitter's net favorability among users dropped from 49 percent to 42 percent. Instagram, in comparison, had a net favorability among users of 71 percent in June 2023.
- Threads' aim of being more positive and less controversial seems to be resonating with users: more users reported a “positive” than a “negative” impression of the platform's positioning as a “more positive” and “less political” version of Twitter, as well as of its more stringent content moderation. On the other hand, more users reported a “negative” than a “positive” impression of Threads' requirement that users have an Instagram account to create a Threads account, and of users' inability to delete a Threads account without also deleting the linked Instagram account.
- Of users who use both Threads and Twitter, 23 percent say Threads is the platform they primarily use, while 38 percent use the two platforms equally.
Pew Research Center conducted a survey of 5,101 U.S. adults from May 15-21, 2023 regarding TikTok and the risks it poses to national security. Key findings include:
- Of the U.S. adults surveyed, 17 percent do not see TikTok as a threat to national security, while 59 percent see TikTok as either a major or a minor threat to national security. 23 percent of respondents are not sure if it is a threat to national security or not.
- While 70 percent of self-identified Republicans believe TikTok is a national security threat, only 53 percent of self-identified Democrats do.
- There are also major differences in opinion across age ranges. Among 18- to 29-year-olds, 13 percent see TikTok as a national security threat. Among adults over 65 years old, 46 percent see TikTok as a national security threat.
- 42 percent of TikTok users believe it is a national security threat, compared with 65 percent of respondents who do not use TikTok.
- 64 percent of respondents report being concerned about TikTok's data handling practices, while 34 percent report being unconcerned.
Between May 15-21, 2023, Pew Research Center surveyed 5,101 U.S. adults regarding Black Lives Matter and broader engagement with political and social issues through social media, marking the tenth anniversary of the #BlackLivesMatter hashtag's first use. Pew also analyzed public tweets from July 2013 through March 2023; this data collection and analysis was conducted between March 1 and May 12, 2023. Key findings include:
- The daily use of the #BlackLivesMatter hashtag on Twitter peaked on May 25, 2020, the day George Floyd was killed by a police officer in Minneapolis. It continued to be heavily used throughout the summer of 2020, with tweets from May 2020 to September 2020 making up over half of all tweets that use the #BlackLivesMatter hashtag.
- 72 percent of the tweets using the #BlackLivesMatter hashtag were in support of the movement.
- One-third of the tweets under the #BlackLivesMatter hashtag made mention of police and/or police violence.
- Around one-third of all tweets made under the #BlackLivesMatter hashtag between July 2013 and March 2023 are not currently accessible on Twitter.
- Of the respondents who use social media, 77 percent have seen Black Lives Matter related content on social media websites and apps.
- 24 percent of the respondents said they had ever posted something in support of Black Lives Matter on a social media website. Among Black social media users, 52 percent had posted in support of Black Lives Matter, compared with roughly one in five each of Hispanic, Asian, and white social media users.
- 10 percent of the respondents said they had ever posted something in opposition to Black Lives Matter on a social media website.
- Respondents were also asked how effective social media and news media were at bringing attention to the issue of Black Lives Matter. 43 percent of respondents said that social media is “extremely” or “very” effective at bringing attention to the issue of Black Lives Matter, while 32 percent said the same of news organizations.
Ipsos conducted a 31-country survey of 22,816 adults ages 16 and up between May 26 and June 9, 2023, about artificial intelligence. Key findings include:
- Across all 31 countries, on average:
- 52 percent of respondents say that products and services using AI make them nervous, while 54 percent say those same products and services make them excited.
- 67 percent of respondents report having a “good understanding of what AI is,” and 51 percent say they know which products and services use AI.
- Across all 31 countries on average, 57 percent of respondents believe AI will change how they do their current job, and 36 percent believe it will replace their job.
- 49 percent of respondents reported that AI products and services had significantly changed their lives in the last three to five years, the same percentage as in December 2021.
We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.