Claire Carey and Kennedy Patlan are senior associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues.
On May 2nd, a leaked Supreme Court draft decision on abortion access sparked an immediate response not just from those concerned about women’s reproductive rights but also from a range of political, private, and public interest groups monitoring the implications for data privacy and surveillance. Between the Supreme Court blocking Texas’s social media censorship law from going into effect and Twitter launching a new misinformation policy, content moderation was also a hot topic in May.
Federal agencies also made news in May, as the Department of Homeland Security’s Immigration and Customs Enforcement (ICE) division was found to be using previously unknown, broad surveillance tactics that rely on data from private brokers, and the Department of Justice and the Equal Employment Opportunity Commission released guidance on preventing AI-enabled hiring biases.
The below analysis is based on techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues.
Location Privacy & Surveillance
- Summary: With Politico revealing that the Supreme Court may soon overturn Roe v. Wade, privacy advocates are warning of the heightened dangers of health and location data collection as people seek abortion and reproductive health access in states where it may soon no longer be legal. VICE reported evidence that commercial data brokers sell location information taken from the phones of people at abortion clinics, which resulted in pressure for one such data broker, SafeGraph, to stop offering data related to Planned Parenthood and other family planning centers. Privacy experts are concerned about the ways that data could be leveraged in prosecutions if abortion becomes criminalized, as anyone could potentially purchase this data to learn more about a patient’s location or abortion-related search histories. In an NPR article, advocates from Fight for the Future and the Center for Democracy & Technology weigh in on how data taken from period-tracking apps can also be used to penalize someone considering an abortion.
- Stakeholder Response: In response to the draft SCOTUS decision, Senate Democrats sent a letter to FTC chair Lina Khan urging the FTC to protect the data privacy of women “seeking reproductive healthcare” and detailing questions to be answered by June 10. Led by Senator Ron Wyden (D-OR), Democrats also called on Google to stop collecting cell phone location data to prevent the surveillance of women seeking abortion access. Finally, Senate Democrats set a deadline of May 31 for SafeGraph Inc. and Placer.ai to provide information about any collection or sales of cellphone data tied to visits to abortion clinics. Addressing some of these data privacy concerns, Senator Wyden also pointed to his previously introduced legislation, the Fourth Amendment Is Not For Sale Act, which would ban U.S. government and law enforcement agencies from buying location data and other personal information from third-party data brokers without a warrant. On the public advocacy front, Human Rights Watch and others called on Congress to pass federal data privacy protections, and the Electronic Frontier Foundation released digital privacy tips for those involved in abortion access. Additional letters were penned by Tech for the Future and the Tech Oversight Project, among others. Attorney General Letitia James (New York) wrote an alert aimed at health app users, and Attorney General Rob Bonta (California) wrote an alert aimed at companies regarding privacy implications for pregnant app users. Representing business owners and entrepreneurs, the Chamber of Commerce sent a letter to the Senate Commerce Committee and House Energy and Commerce Committee urging members not to include a private right of action in any new data privacy legislation.
- What We’re Reading: The debate about mass data collection in a post-Roe world also points to larger questions about data and surveillance. This month, the Center on Privacy and Technology at Georgetown Law released its report American Dragnet: Data-Driven Deportation in the 21st Century, a two-year investigation into Immigration and Customs Enforcement’s contracting and procurement record, which shows that ICE has expanded its surveillance by accessing and using massive data sets from private data brokers to assist in deportations. ICE was found to have access to the driver’s license data of 3 out of 4 adults, as well as the electricity, water, gas, and internet records of 3 out of 4 adults. In other privacy news, the Center for Democracy & Technology released the report Ableism And Disability Discrimination in New Surveillance Technologies, which identifies how surveillance technologies harm disabled people in education, the criminal legal system, health care, and the workplace.
Texas Spotlights Censorship and Content Moderation
- Summary: A flurry of legal activity took place this month in Texas and on the national stage spotlighting social media censorship and content moderation. In a 5 to 4 decision, the Supreme Court blocked HB 20, Texas’ social media censorship law, from going into effect (the court’s order gave no reasons). Earlier in the month, the 5th U.S. Circuit Court of Appeals had surprised many by ruling that HB 20 could go into effect. This law would have allowed Texas residents to sue major social media platforms if they thought that the platform removed their content due to their “viewpoint.” Last year, Florida passed a similar law, SB 7072, which bans social media platforms from blocking political candidates. The Florida law was initially blocked through an injunction issued by a federal judge. A panel of 11th Circuit judges upheld the injunction this month, citing provisions in the law that were “likely unconstitutional.”
- Stakeholder Response: Though HB 20 was introduced in Texas, the implications for national tech policy prompted wide-ranging legal advocacy. In response to the 5th Circuit decision, industry groups representing social media platforms filed an emergency application for immediate relief, aiming to reverse the Fifth Circuit decision by bringing the case to the Supreme Court. Coalitions of advocacy and public interest groups, including the Center for Democracy & Technology, the Electronic Frontier Foundation, the National Coalition Against Censorship, the NAACP and Anti-Defamation League, and the Reporters Committee for Freedom of the Press and ACLU, filed amicus briefs calling on the Supreme Court to reinstate the block on HB 20. Ultimately, the Supreme Court reinstated a block on the law issued by a lower court, with four justices dissenting.
- What We’re Reading: The New York Times reported on the Supreme Court’s decision, including coverage of the dissent. Former FCC Chairman Tom Wheeler writes for Brookings on the common threads between internet free speech, Donald Trump’s influence in Elon Musk’s acquisition of Twitter, the white supremacist Buffalo shooting, and Texas’ social media censorship law going into effect. Dr. Welton Chang, former Chief Technology Officer at Human Rights First, wrote for Tech Policy Press on how the Buffalo shooting highlighted the limitations of current content moderation protocols. And in other content moderation news, NYU Center for Social Media and Politics researchers found that a prominent plug-in tool that labels misinformation may not necessarily push readers toward reliable resources.
AI Hiring Discrimination
- Summary: On May 15, the DOJ and EEOC issued guidance for public and private employers on the use of software and algorithmic decision-making to assess job applicants and employees, and on how to comply with the Americans with Disabilities Act. This guidance comes as increasingly large numbers of employers are turning to AI to facilitate hiring, even as a growing body of research demonstrates the technology’s capacity for exacerbating bias and discrimination in hiring. The guidance is part of the EEOC’s larger Artificial Intelligence and Algorithmic Fairness Initiative, which seeks to (among other things) “issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.”
- Stakeholder Response: In a briefing with reporters about the newly published guidance, EEOC Chair Charlotte Burrows stated, “We cannot let these tools become a high-tech pathway to discrimination.” At the same briefing, Assistant Attorney General for Civil Rights Kristen Clarke described the guidance as “sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers.” The Electronic Privacy Information Center’s Ben Winters described the announcement as “put[ting] employers on notice that the agencies are expecting them to have a higher standard for the vendors they use.” Finally, the Center for Democracy & Technology and the American Association of People with Disabilities wrote a joint letter applauding the DOJ and EEOC for creating the AI-related guidance.
- What We’re Reading: NPR and Wired reported on the DOJ and EEOC joint guidance and the ways hiring algorithms can discriminate against people with disabilities. The DOJ and EEOC guidance also brings to mind recommendations from a coalition of public interest organizations, outlined in their 2021 memo on technology’s role in hiring discrimination and steps federal actors should take. Brookings contextualizes the use of AI in hiring practices and explains how NYC is implementing audits of hiring vendors to combat bias in the AI software the city uses in its processes and tools. Vice’s Motherboard reports that four Democratic senators have called on the FTC to “investigate evidence of deceptive statements made by ID.me” regarding the company’s use of one-to-one facial recognition (involving one photo and one point of reference) versus one-to-many facial recognition (involving one photo and a database of references). In their letter, Democratic senators posit that ID.me misled both consumers and state and federal officials by falsely asserting that ID.me does not use one-to-many facial recognition.
Other New Legislation and Policy Updates
Several new pieces of legislation were introduced this month, focusing on content moderation, competition, and privacy. Notable bills to follow include:
- 21st Century FREE Speech Act (S.1384 – Sponsored by Sen. Bill Hagerty, R-TN and H.R.7613 – Sponsored by Rep. Marjorie Taylor Greene, R-GA): This Republican bill would repeal Section 230, require the largest platforms to disclose their content moderation practices to users, and update common-carrier laws to treat Big Tech platforms as such.
- Digital Platform Commission Act (S.4201 – Sponsored by Sen. Michael Bennet, D-CO and H.R.7858 – Sponsored by Rep. Peter Welch, D-VT): This Democratic bill, introduced in both chambers, would create a five-member federal commission to hold hearings, conduct research and investigations, and engage in public rulemaking to establish oversight and rules that digital platforms would have to follow to maintain transparency on content moderation and to “promote competition and protect consumers.” As reported by the Washington Post and Tech Policy Press, this act would also create a “Code Council” of technologists and public interest technology leaders who would create technical standards for the commission to follow and a “Research Office” that would work with outside scholars to conduct research on the platforms.
- Competition and Transparency in Digital Advertising Act (S.4258 – Sponsored by Sen. Mike Lee, R-UT, Sen. Amy Klobuchar, D-MN, and Sen. Richard Blumenthal, D-CT, and H.R.7839 – Sponsored by Rep. Ken Buck, R-CO, Rep. Burgess Owens, R-UT, Rep. Pramila Jayapal, D-WA, Rep. David Cicilline, D-RI, and Rep. Matt Gaetz, R-FL): This bipartisan bill, introduced in both chambers, would ban platforms that process more than $20B in digital ad sales from owning and selling parts of the online ad market. Currently, Google, Facebook, and Amazon exceed this $20B threshold.
- Protecting Military Service Members’ Data Act of 2022 (S.4281 – Sponsored by Sen. Bill Cassidy, R-LA, Sen. Elizabeth Warren, D-MA, and Sen. Marco Rubio, R-FL): This bipartisan bill would give the FTC the power to take civil action and obtain damages against data brokers that sell U.S. military service members’ data to other countries.
- American Innovation and Choice Online Act (S.2992 – Sponsored by Sen. Amy Klobuchar, D-MN, and Sen. Chuck Grassley, R-IA): This bipartisan antitrust bill, first introduced in October 2021, was amended to include stronger cybersecurity and privacy protections. In the updated bill, platforms can avoid being penalized if they can prove that anticompetitive practices protect user privacy.
- United States Innovation and Competition Act (USICA) (S.1260 – Sponsored by Sen. Charles Schumer, D-NY, among 13 other senators): Punchbowl News reported that Senate leadership is aiming to file the finished conference report reconciling the differences between the Senate USICA and House America COMPETES Act by June 21st. A coalition of small businesses urged senators to leave out the SHOP SAFE Act in the final version of the competition bill.
Public Opinion Spotlight
This month, Morning Consult conducted a poll of 2,210 adults on the privatization of social media, free speech, misinformation, and hate speech and found that:
- “Over a third of Twitter users said they believe political debate and hate speech would get worse if a social media platform went from public to private.”
- “3 in 5 Twitter users and U.S. adults overall said they believe social media companies have the right to ban users if they violate policies on the types of content they share.”
- “55% of Twitter users said platforms are responsible for crafting and implementing policies about how political content is shared.”
The annual Axios Harris Poll 100 reputation rankings, which assess the reputations of the “100 most visible companies in America,” were published this month. The rankings are based on a survey of 33,096 Americans conducted from March 11 to April 3, 2022. Regarding the overall reputation of technology companies and their role in content moderation and free speech, the poll found that:
- Social media companies were some of the worst-ranked of the 100 companies, with TikTok ranked 94th, Meta/Facebook ranked 97th, and Twitter ranked 98th. On the other hand, other tech companies were given some of the highest ratings, with Amazon ranked 8th, Microsoft 15th, and Apple 21st.
- Americans see publishers, such as the Washington Post, as more responsible for content than platforms, such as Facebook. Specifically, “57% of Americans say publishers are responsible for content that goes against their personal values, not platforms. While only 43% said platforms were more responsible.”
- There is a partisan divide in how Americans view technology’s role in free speech: “72% of Democrats say tech companies promote free speech vs. only 41% of Republicans.”
– – –