April 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino / Apr 30, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Rohan Tapiawala, a Freedman Consulting Phillip Bevington policy & research intern, also contributed to this article.

The US Capitol in Washington, DC.

April 2024 saw movement on key tech issues on Capitol Hill and in federal agencies:

  • Congress passed a bill requiring ByteDance, TikTok’s China-based owner, to sell the app to a US-based owner within nine months or risk its removal from all US-based app stores. The bill follows more than a year of congressional efforts to ban TikTok, and it is likely to face a First Amendment challenge in court from ByteDance. The sale requirement and prospective buyers may also face complications given the app’s anticipated selling price due to its popularity, concerns around the algorithm’s source code and licensing, the difficulty of separating US services from global operations, and potential antitrust challenges.
  • In the latest of ongoing efforts, House members introduced the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0, H.R.7890), companion bills to the Senate’s KOSA (S.1409) and COPPA 2.0 (S.1418). Although no committee review or House vote has been scheduled for either bill, the House introductions keep kids’ online safety an active debate in Congress. The journey for these two bills started nearly a year ago with their introduction in the Senate. In July 2023, the Senate Committee on Commerce, Science, and Transportation marked up both bills, and in February 2024, Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) unveiled a new version of KOSA, which had 68 Senate co-sponsors at the time of publication.
  • On April 7, House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science, and Transportation Chair Maria Cantwell (D-WA) unveiled a discussion draft of the American Privacy Rights Act of 2024 (APRA), the latest attempt at comprehensive data privacy legislation.
  • Last week, the Federal Communications Commission (FCC) reinstated net neutrality, and Congress took action on Foreign Intelligence Surveillance Act (FISA) Section 702 reauthorization.

Read on to learn more about April developments on the revival of the net neutrality debate, Section 702 reauthorization, data privacy bills in Congress, and more.

FCC Votes to Reinstate Net Neutrality, Reigniting Open Internet Debates

  • Summary: In a 3-2 vote along party lines, the FCC voted this month to revive net neutrality, reclassifying broadband as a telecommunications service subject to important consumer protections. By restoring the Obama-era rules, the FCC will once again have the ability to prevent broadband providers from blocking or throttling internet traffic or speeding up access for business partners, requiring providers to treat all traffic equally. FCC Chair Jessica Rosenworcel framed the decision around the importance of high-speed internet access for US citizens, stating that “every consumer deserves internet access that is fast, open and fair.” In a rebuttal, FCC Commissioner Brendan Carr said the agency’s decision to classify internet access as a public utility is “unlawful and misguided.” The telecommunications industry opposed the revival of net neutrality and is expected to sue to overturn the FCC’s ruling.
  • Stakeholder Response: The FCC’s ruling to reinstate net neutrality has reignited ongoing debates about the open internet and government regulation. The Computer and Communications Industry Association, whose members include Amazon, Apple, and Meta, supported the FCC’s ruling, stating that net neutrality will “preserve open access to the internet.” The Electronic Frontier Foundation also published a piece in favor of net neutrality as a protector of consumer freedom on the internet, offered additional ideas to tighten potential loopholes in the ruling, and addressed concerns about preemption.
  • In response to the FCC’s release of draft rules earlier this month, dozens of Republican lawmakers sent a letter to FCC Chair Rosenworcel opposing the reclassification of broadband access as a public utility, warning that doing so would impede the competitiveness of the telecommunications industry. USTelecom, a trade association representing telecommunications businesses including AT&T and Verizon, released a statement from its President and CEO, Jonathan Spalter, calling net neutrality a “nonissue for broadband consumers, who have enjoyed an open internet for decades.” Spalter also described the ruling as a “harmful regulatory land grab.”
  • What We’re Reading: The Wall Street Journal explored the legal challenges that the reinstatement of net neutrality will likely face. The Washington Post examined how definitions of the modern internet have changed, calling certain applications of net neutrality into question. Government Technology discussed how net neutrality has been heavily favored by the public and how its reinstatement benefits consumers.

Congress Reauthorizes Section 702 Despite Pushback from Lawmakers and Privacy Experts

  • Summary: Despite continued division among lawmakers, Congress reauthorized Section 702 of the Foreign Intelligence Surveillance Act (FISA), avoiding a prolonged lapse and extending the program for an additional two years. Section 702 permits the US government to collect digital communications in order to surveil non-US targets based outside of the country. The provision has been heavily criticized by lawmakers and privacy experts because the program lacks warrant requirements when collecting information on US citizens who are in contact with non-US targets. On April 12, the House passed a two-year reauthorization of Section 702 by a vote of 273-147, after the bill was cut back from an originally intended five-year extension. A separate vote on an amendment to add a warrant requirement failed, producing a rare 212-212 tie, with 13 House members abstaining and House Speaker Mike Johnson (R-LA) ultimately casting the decisive vote against the amendment. Despite Sen. Dick Durbin’s (D-IL) last-minute push to introduce a warrant requirement, the final bill did not include any additional amendments. While there was uncertainty in the Senate ahead of Section 702’s expiration, the chamber avoided a prolonged lapse and passed the reauthorization bill 60-34 on April 20, and President Biden signed it into law the same day.
  • Stakeholder Response: Reaction from civil rights organizations to Section 702’s reauthorization was largely negative. Elizabeth Goitein, senior director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law, said that lawmakers “voted to reward the government’s widespread abuses of Section 702 by massively expanding its surveillance powers.” Kia Hamadanchy, senior policy counsel at the American Civil Liberties Union (ACLU), provided a list of “dangerous provisions” included in Section 702’s reauthorization and expressed disappointment that Congress did not address civil liberties concerns and “long-standing constitutional problems with this authority.”
  • Privacy advocates also raised concerns over a new provision in the reauthorization bill that expanded the definition of entities required to share information with the government. The provision extended the program’s scope to include “any other service provider with access to communication equipment used to transmit or store communication.” Caitlin Vogus, deputy advocacy director for Freedom of the Press Foundation, claimed that under the provision, “anyone from a landlord to a laundromat could be required to help the government spy.” Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology, shared a similar sentiment, stating that the reauthorization bill “could be used to subject virtually any commercial landlord to receiving 702 orders.” Despite the pushback, Rep. Jim Himes (D-CT) said the provision was “narrowly tailored” and is not meant to be served on “janitors or Starbucks baristas.” The provision was included in the final version of the reauthorization bill signed by President Biden.
  • What We’re Reading: Noah Chauvin in The Dispatch analyzed how Section 702 has been used to spy on US citizens and suggested potential reforms to safeguard individual rights. Forbes discussed Section 702’s implications for civil liberties and how the program continues to prioritize national security over “fundamental privacy rights.” In Tech Policy Press, Free Press policy counsel Jenna Ruddock wrote that “there are particularly clear risks in expanding a government surveillance authority that has been abused to surveil protesters, immigrants, political candidates, and even journalists.”

Introduction of the American Privacy Rights Act Revives Federal Privacy Debates

  • Summary: In early April, House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science, and Transportation Chair Maria Cantwell (D-WA) introduced a discussion draft of the American Privacy Rights Act (APRA), a comprehensive national data privacy bill. APRA mirrors the American Data Privacy and Protection Act (ADPPA, H.R. 8152), which was introduced in the last Congress. APRA would require transparency from covered entities regarding the collection and use of consumer data, and grant consumers rights such as access, correction, deletion, and export of their data, as well as the ability to opt out of targeted advertising and data transfers. It also mandates data minimization practices, regulates the use of data in AI training, prohibits discrimination based on consumer data, and regulates the use of algorithms for consequential decisions. Enforcement authority for APRA would fall to the FTC and state attorneys general. The bill would also create a private right of action allowing individuals to “file private lawsuits against entities that violate their rights under this Act.” Finally, APRA would preempt most state laws, with carve-outs preserving “provisions of state laws related to employee privacy, student privacy, data breach notifications and health privacy” and “several rights to statutory damages under state law.” APRA’s introduction has revived familiar debates about state preemption and the private right of action, among other concerns.
  • Stakeholder Response: APRA’s introduction sparked a range of responses from stakeholders. The House Energy & Commerce Subcommittee on Innovation, Data, and Commerce held a hearing on APRA and kids online safety bills. Members indicated optimism that Congress could pass a comprehensive data privacy bill before the November elections. In response to the publication of the discussion draft, R Street celebrated the bill, emphasizing the ways that APRA has evolved from ADPPA. The Lawyers’ Committee for Civil Rights Under Law also applauded the efforts to pass comprehensive privacy legislation. Microsoft’s Chief Privacy Officer Julie Brill spoke in favor of a federal privacy bill but did not comment on APRA specifically, calling for “consistent and robust protections for individuals and clarity for organizations who have otherwise faced varying obligations across state lines.” In contrast, the Electronic Frontier Foundation critiqued the bill, arguing that state preemption would prevent future stronger protections and that the bill’s provisions remain too weak to sufficiently protect consumers. The US Chamber of Commerce also criticized the bill, but from the opposite perspective, arguing for stronger state preemption language and against a private right of action.
  • What We’re Reading: Digiday published an explainer summarizing APRA and situating it within the larger international privacy ecosystem. Statescoop explored APRA’s potential impact on state privacy legislation. Tech Policy Press brought together experts to analyze APRA, published an explainer written by Perla Khattar, and shared a number of additional analyses, including Joseph Jerome’s assessment of APRA’s data minimization provisions, Justin Brookman’s critique of APRA’s protections, Tim Bernard’s analysis of APRA’s impact on children online, and Mark MacCarthy’s suggestion for a regulatory addition to the law.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, and industry.

In the executive branch and agencies:

  • 180 days after the signing of the AI Executive Order, the White House released a statement celebrating the implementation successes of federal agencies in the past six months. The statement highlighted the National Institute of Standards and Technology’s (NIST) release of draft documents on generative AI for public comment, the Department of Housing and Urban Development’s issuing of guidance on nondiscriminatory use of AI in housing, and the cross-agency development of worker empowerment principles and practices in AI deployment, among other efforts.
  • Accompanying the White House’s statement on AI Executive Order implementation, the Department of Commerce announced that NIST has released four draft publications to provide additional guidance on safe and responsible AI. NIST’s publications included two companion resources to NIST’s AI Risk Management Framework and Secure Software Development Framework designed to mitigate generative AI risks, documents offering guidance on promoting transparency in digital content, and a framework for global AI standards.
  • The Federal Trade Commission denied approval for the use of facial age estimation technology to confirm a user’s age under the Children’s Online Privacy Protection Rule (COPPA Rule). The application, submitted to the Commission by the Entertainment Software Rating Board, Yoti, and SuperAwesome, aimed to use the technology to fulfill the COPPA Rule’s parental consent requirement for online sites and services used by children under 13.
  • US Secretary of Commerce Gina Raimondo announced appointments for the US AI Safety Institute at NIST, including: Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer, Mara Campbell as Acting Chief Operating Officer and Chief of Staff, Rob Reich as Senior Advisor, and Mark Latonero as Head of International Engagement.

In Congress:

  • In a letter to Senate Majority Leader Chuck Schumer and other members of the Senate AI working group, a bipartisan group of senators including Sens. Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) unveiled a congressional framework to address “catastrophic” AI risks related to biological, chemical, cyber, and nuclear weapons. The framework would require AI developers to report large acquisitions or usage of computing hardware for AI development, ensure entities incorporate safeguards against extreme AI risks, and require evaluation and licensing before AI systems are deployed. It also recommended the creation of a new federal oversight body composed of subject matter experts, skilled AI scientists, and engineers that would oversee the implementation of new safeguards against AI risks.
  • The House Energy & Commerce Subcommittee on Communications and Technology held a hearing on Section 230, where academics discussed free speech and content moderation implications of the law.
  • The House Administration Committee recently announced that it has approved the use of ChatGPT Plus for certain committee staffers and held initial training on how to properly use the tool. The announcement is part of the committee’s focus on providing public transparency on the use of AI by House offices and legislative branch agencies.

In civil society:

  • More than 200 civil society organizations, researchers, and journalists signed a letter to executives at leading technology companies, including Google, Meta, Reddit, and YouTube, urging them to reinstate election integrity policies and reinforce safety measures on their platforms to combat global extremism, threats to democracy, and harmful disinformation.
  • The Artist Rights Alliance published an open letter signed by over 200 artists urging “AI developers, technology companies, platforms and digital music services to cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists.”
  • SAG-AFTRA, a union representing thousands of actors, announcers, broadcast journalists, and other media professionals, reached an agreement with leading record labels, including Warner Music Group, Sony Music Entertainment, and Universal Music Group, over protections for artists against certain uses of AI technology.
  • The Center for AI Policy (CAIP) released a model for AI legislation designed to address catastrophic AI harms and risks such as the production of weapons of mass destruction or the ability to disrupt critical infrastructure. CAIP’s legislative model includes suggestions for creating a new federal agency focused on AI risks, expanding the White House’s emergency powers, and bolstering regulatory capabilities against AI developers.
  • Organizers from Cambridge Local First, Tech Policy Press, and Integrity Institute released an updated Technology Policy Tracker that aims to provide a comprehensive overview of major technology policies and legislation at the federal, state, and international levels.
  • Georgetown University’s Emerging Technology Observatory released a new study finding that only 2 percent of global AI research is dedicated to AI safety, with only 5 percent of US-based AI research going toward understanding safety.

In industry:

  • Google announced a $15 million investment in AI skills training for developing countries. The announcement was accompanied by a report outlining a roadmap for developing countries to leverage AI technologies.
  • BSA | The Software Alliance, an advocacy coalition of tech companies, published recommendations for policymakers on responsible AI, including “encouraging global harmonization, implementing strong corporate governance practices to mitigate AI risks, protecting privacy, promoting transparency, and promoting multiple development models,” among others.
  • A group of major tech companies, including Amazon, Google, Meta, Microsoft, and OpenAI, committed to a slate of online child sexual abuse prevention principles designed to protect kids from the harms of their generative AI products.
  • More than 80 leading AI and technology advocacy organizations from industry, civil society, academia, and other fields sent a letter to Congress urging lawmakers to prioritize NIST funding to support the agency's efforts to “advance AI research, standards, and testing, including through the agency’s recently established U.S. AI Safety Institute.”

New and Updated AI EO RFIs and Public Comments

  • The National Institute of Justice (NIJ) requested written feedback from the public regarding section 7.1(b) of Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” NIJ aims to gather insights for a report on the use of artificial intelligence (AI) within the criminal justice system. The comment period ends on May 28.
  • The Department of Commerce’s U.S. Patent and Trademark Office requested public comment on how AI could “affect evaluations of how the level of ordinary skills in the arts are made to determine if an invention is patentable under U.S. law.” The comment period ends on July 29.

Other New Legislation and Policy Updates

The following bills made progress in April:

  • Fourth Amendment Is Not For Sale Act (H.R.4639, sponsored by Reps. Warren Davidson (R-OH), Jerrold Nadler (D-NY), Andy Biggs (R-AZ), Zoe Lofgren (D-CA), Ken Buck (R-CO), Pramila Jayapal (D-WA), Thomas Massie (R-KY), and Sara Jacobs (D-CA)): This bill would ban law enforcement and intelligence agencies from buying from third-party data brokers sensitive information that would otherwise require a warrant to obtain. The Fourth Amendment Is Not For Sale Act passed the House 220-197 but has not yet been taken up by the Senate. The Biden administration announced its opposition to the bill.

The following bills were introduced in April:

  • The Future of AI Innovation Act (S. 4178, sponsored by Sens. Maria Cantwell (D-WA), Todd Young (R-IN), Marsha Blackburn (R-TN), and John Hickenlooper (D-CO)): This bill would formally authorize the NIST AI Safety Institute to develop voluntary “performance benchmarks, evaluations and transparency documentation standards for public and private sector AI systems,” and would direct federal agencies to make curated datasets widely available for public use and to create AI testbeds for evaluating AI systems.
  • Generative AI Copyright Disclosure Act of 2024 (H.R.7913, sponsored by Rep. Adam Schiff (D-CA)): This bill would mandate that companies using copyrighted material in training datasets for generative AI systems submit a detailed summary of those works to the US Copyright Office. Additionally, it would establish a publicly accessible online database maintained by the Register of Copyrights to contain these disclosures.
  • Reforming Intelligence and Securing America Act (H.R.7888, sponsored by Rep. Laurel Lee (R-FL)): This bill would reform and reauthorize the Foreign Intelligence Surveillance Act (FISA) so that it balances national security and privacy concerns when addressing abuses by the intelligence community, such as unwarranted surveillance queries of US persons. It would impose new restrictions on surveillance, including limits on querying collected information and requiring FBI approval for certain queries. Additionally, it would prohibit political involvement in query approvals, mandate sworn statements for surveillance orders, and increase penalties for FISA-related offenses.
  • Emerging Innovative Border Technologies Act (H.R.7832, sponsored by Reps. Lou Correa (D-CA) and Morgan Luttrell (R-TX)): This bill would require the Secretary of Homeland Security, along with relevant officials, to submit a plan to Congress for the research, identification, integration, and deployment of innovative technologies, including AI, into border security operations.
  • Children and Teens’ Online Privacy Protection Act (H.R.7890, sponsored by Reps. Tim Walberg (R-MI), Kathy Castor (D-FL), Larry Bucshon (R-IN), Anna Eshoo (D-CA), Earl Carter (R-GA), Seth Moulton (D-MA), Neal Dunn (R-FL), Jake Auchincloss (D-MA), Gus Bilirakis (R-FL), and Russ Fulcher (R-ID)): This bill would amend the Children’s Online Privacy Protection Act of 1998 to further strengthen protections relating to the online collection, use, and disclosure of personal information of children and teens. It would do so by prohibiting online platforms, mobile applications, and connected devices from allowing minors to post personal data content. The bill would also mandate platforms to implement deletion mechanisms for such content and prohibit them from deleting user content containing personal information if republished by others.
  • Child Exploitation and Artificial Intelligence Expert Commission Act of 2024 (H.R.8005, sponsored by Reps. Russell Fry (R-SC), Michael Lawler (R-NY), Donald Davis (D-NC), Mary Miller (R-IL), Zachary Nunn (R-IA), André Carson (D-IN), Ashley Hinson (R-IA), Don Bacon (R-NE), Alma Adams (D-NC), Claudia Tenney (R-NY), Anthony D’Esposito (R-NY), and Gabe Vasquez (D-NM)): This bill would establish a commission tasked with investigating and developing recommendations to improve law enforcement's ability to prevent, detect, and prosecute child exploitation crimes committed using AI.

Public Opinion on AI Topics

A survey conducted by YouGov between March 14-18, 2024, polled 1,073 US adult citizens about their feelings on AI:

  • 54 percent of respondents describe their feelings towards AI as “cautious.” Additionally, 49 percent are concerned, 40 percent are skeptical, 29 percent are curious, and 22 percent are scared. Respondents were prompted to select all that apply.
  • 44 percent of Americans believe it’s likely that AI will eventually surpass human intelligence, with 22 percent considering this very likely. Moreover, 14 percent believe AI is already more intelligent than people.
  • Regarding the potential of AI to cause catastrophic consequences, 15 percent are very concerned about AI causing the end of humanity, while 24 percent are somewhat concerned, 25 percent are not very concerned, and 19 percent are not concerned at all.
  • A majority of those polled (55 percent) do not trust AI to make unbiased decisions. Similarly, 62 percent do not trust it to make ethical decisions, and 45 percent don’t trust it to provide accurate information.
  • Adults under 30 are more likely than older generations to trust AI to make unbiased decisions (49 percent), to make ethical decisions (42 percent), or to provide accurate information (57 percent).

A similar survey conducted by YouGov of 1,066 US adult citizens between March 15-18, 2024 about feelings towards AI’s impact on industry found that:

  • 32 percent of respondents feel that the effects of artificial intelligence (AI) on society will be somewhat or very positive, while 47 percent feel the effects will be somewhat or very negative. Additionally, 15 percent feel the effects will be neither positive nor negative.
  • Among respondents employed full or part-time, 36 percent were somewhat or very concerned about the possibility of AI resulting in job loss, reduced hours, or salary cuts, while 59 percent were not very concerned or not at all concerned.
  • When asked to consider the next 30 years, 50 percent of respondents employed full or part-time believe jobs like theirs will primarily be done by humans, while 27 percent thought AI would primarily handle such tasks.

In a national survey of 1,800 registered voters on AI topics, conducted February 3-11, 2024, the News/Media Alliance found:

  • 95 percent of respondents in the US report recent exposure to information about AI.
  • 66 percent of those polled expressed discomfort with AI, while 31 percent felt comfortable.
  • 72 percent of people surveyed support efforts to limit the power of AI, with 57 percent supporting compensation for news publishers whose content is used to train AI.
  • Primary concerns included: AI’s potential to increase misinformation (66 percent), undermine trustworthy news sources (59 percent), threaten election integrity (60 percent), and facilitate plagiarism (58 percent).

The Artificial Intelligence Policy Institute surveyed an online sample of 1,114 respondents from March 25-26, 2024, and found that:

  • 63 percent of respondents support policies requiring AI labs to impose strict cybersecurity measures, develop containment plans for dangerous models, share predictions with the government, and leverage outside experts to check their systems.
  • 69 percent of those polled support requiring “dual use” AI model developers to prevent harm or face lawsuits.
  • 68 percent of people surveyed believe in developing regulations to prevent harm before it occurs, while 14 percent support waiting to see how AI technology develops.
  • 75 percent of respondents see powerful AI models as a national security concern and urge measures to prevent misuse.

AuthorityHacker conducted a survey of 2,000 individuals in the United States, aged 18-55, in March 2024 on AI topics. They found that:

  • 80 percent of respondents advocate for strict AI regulations, even if it means slowing down technological innovation.
  • 56 percent of those surveyed view current regulatory frameworks for AI risk management as effective, with 15 percent considering them “very effective.”
  • 41 percent of respondents support international standards for AI legislation, and 41 percent support a combination of international standards and local laws.
  • 82 percent of polled people express discomfort about AI training using personal data.
  • 84 percent of participants believe that AI companies should pay royalties for copyrighted content used in AI model training.

Public Opinion on Other Topics

Chamber of Progress conducted a survey between January 5-12, 2024, of 4,637 registered voters across eight battleground states (Arizona, Georgia, Michigan, Nevada, New Hampshire, North Carolina, Pennsylvania, and Wisconsin) on technology policy opinions. After considering results from each battleground state, Morning Consult identified key overarching themes on tech among voters:

  • Over half of those polled in each battleground state express concerns about technology companies but still favor those companies’ apps and products and want more technology jobs in their communities.
  • An overwhelming majority of respondents across all battleground states surveyed say the next President should prioritize technology jobs and services over regulating Amazon, Google, and Apple.
  • At least eight in ten respondents supported Apple and Google reviewing the apps available on their devices for security risks and restricting apps based on any uncovered safety risks.

The American Psychiatric Association surveyed 2,204 adults from March 11-14, 2024, and found that:

  • 41 percent of respondents were neutral on whether social media is harmful or helpful to their mental health, while 31 percent said it does more harm than good, and 29 percent said it does more good than harm.
  • 55 percent of participants have used social media to find mental health information, with the following breakdown of popular platforms being used: YouTube (34 percent of respondents), Facebook (31 percent), Instagram (19 percent), and TikTok (19 percent).
  • When asked about social media’s effect on various facets of their life, respondents answered in the following ways:
    • Regarding their relationships with family and friends: 30 percent said social media has helped, while 14 percent said it has hurt. 46 percent reported no impact, and 9 percent said they are unsure.
    • On their self-esteem: 23 percent reported that social media has helped, while 27 percent said it has hurt. 51 percent reported no impact, and 9 percent were unsure.
    • On their lifestyle: 26 percent said social media has helped, while 13 percent said it has hurt. 50 percent reported no impact, and 11 percent were unsure.
    • On their health: 22 percent said social media has helped, while 13 percent said it has hurt. 54 percent reported no impact, and 10 percent were unsure.

We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
