Kennedy Patlan, Rachel Lau, and Carly Cramer work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Alondra Solis, a Freedman Consulting Phillip Bevington policy & research intern, also contributed to this article.
This month, artificial intelligence continues to demand public attention and spark policy debate. Just as several prominent tech leaders, including Elon Musk and Steve Wozniak, called for a six-month hiatus in the development of more advanced AI technologies, the Biden administration took action through a National Telecommunications and Information Administration (NTIA) Request for Comment focused on “what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.” Meanwhile, Senate Majority Leader Chuck Schumer (D-NY) said he is leading a charge to introduce AI regulations on Capitol Hill.
At the agency level, the Federal Communications Commission (FCC) announced appointments for its recently launched Space Bureau and Office of International Affairs. Meanwhile, the House Judiciary Committee subpoenaed the Federal Trade Commission (FTC) regarding its investigation into Twitter. An accompanying letter from Rep. Jim Jordan (R-OH) claimed that the agency “failed to comply with the committee’s request for all documents and communications related to the probe” and stated that the agency made “inappropriate and burdensome” demands of the company.
In privacy-related news this month, the Biden administration and the Department of Health and Human Services’ Office for Civil Rights proposed a new rule that aims to strengthen the Health Insurance Portability and Accountability Act. At the state level, 24 states have proposed or passed online privacy legislation, including most recently in Indiana and Iowa. This momentum continued after Utah passed a children’s privacy law in late March. At the national level, multiple kids’ online privacy bills were introduced in Congress.
In corporate news, Google faced the federal government in hearings over antitrust lawsuits brought by the Justice Department and a coalition of state attorneys general. The U.S. Court of Appeals for the Ninth Circuit found that Apple’s App Store did not violate federal antitrust law. This occurred in the midst of an active search for a new ranking member of the House Judiciary Committee’s Subcommittee on Antitrust, Commercial, and Administrative Law, following Rep. David Cicilline’s (D-RI) announced plan to step down from Congress. Meanwhile, NPR suspended its use of Twitter following the social media platform’s decision to label NPR a “government-funded” media outlet. NPR joins PBS, CBS News, and other media outlets that have suspended their Twitter accounts. Following the debacle, Twitter decided to stop labeling global media accounts as government-controlled or funded, with lasting implications for misinformation and propaganda, particularly from countries like China and Russia.
The below analysis is based on techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues.
Read on to learn more about April U.S. tech policy highlights regarding agency efforts to protect consumer health information and attempts to rein in AI across the federal government.
AI Regulation Commands Attention Across the Federal Government
- Summary: April saw a wide variety of policymakers taking steps to develop and enforce regulations on AI systems. The National Telecommunications and Information Administration (NTIA) launched a request for comment on mechanisms to protect Americans from potential negative outcomes. The request sought input on policies that could support “mechanisms to create earned trust in AI systems.” Suggested topics included trust and safety testing, data access, incentivizing credible assurance of AI systems, and industry-specific approaches to sectors such as healthcare and employment. AI also drew attention on the Hill as Senate Majority Leader Chuck Schumer (D-NY) announced that his office is leading a congressional effort to develop legislation on AI regulations, circulating a broad framework that would require developers to disclose their algorithms’ data sources and ethical boundaries, among other information. The plan, which was launched on April 13, consists of four key guardrails intended to create a flexible, sustainable framework for AI regulation that allows for American competition while limiting negative impacts on consumers. Furthermore, the National Artificial Intelligence Advisory Committee issued 23 recommendations intended to ensure that AI systems are developed in a responsible manner while still contributing to American competition and innovation, arguing that balancing the technology’s benefits with its risks can yield economic and geopolitical benefits for the United States. Additionally, FTC Chair Lina Khan joined leaders from the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission in releasing a joint statement on AI.
The statement declared that “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices” and pledged the agencies would use their regulatory and enforcement powers “to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
- Stakeholder Response: Microsoft issued a statement in support of NTIA’s request, arguing that “we should all welcome this type of public policy step to invite feedback broadly and consider the issues thoughtfully, and move expeditiously,” and OpenAI has stated that it “believe[s] that powerful AI systems should be subject to rigorous safety evaluations.” In its response to the NTIA Request for Comment, the Leadership Conference on Civil and Human Rights highlighted the impact of algorithmic bias while celebrating federal momentum to regulate AI. After learning of Senator Schumer’s (D-NY) plans to regulate AI, at least four Republican Senators indicated potential interest in engaging in Schumer’s initiative.
- What We’re Reading: Dr. Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, argued that lawmakers should take advantage of public momentum to urgently implement and adjust existing policies to govern the use of AI. Adam Conner of the Center for American Progress released a report on executive actions to advance AI policy, including ensuring agencies use all legal authorities to address AI harms. Anna Lenhart, a former Hill staffer and now a Policy Fellow at the Institute for Data Democracy and Politics at The George Washington University, is compiling a list of existing proposed legislation that pertains to generative AI. The Stanford Institute for Human-Centered Artificial Intelligence released its annual report, highlighting data on developments in AI implementation, perception, and policy; the report found that reports of AI ethics violations had increased 26-fold since 2012. Tech Policy Press’s podcast, The Sunday Show, discussed NTIA’s efforts with NTIA Senior Advisor for Algorithmic Justice Ellen P. Goodman.
Congress Explores Another Round of Kids’ Online Safety Bills
- Summary: Another wave of congressional efforts targeting kids’ online safety swept through this month: Representatives introduced the Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act of 2023 and the Protecting Kids on Social Media Act, and reintroduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act. Senate Judiciary Chair Sen. Dick Durbin (D-IL) led the introduction of the STOP CSAM Act of 2023 (S. 1199), which expands victim protection measures, directs tech companies to remove CSAM, strengthens accountability and transparency measures for tech companies, and updates sentencing guidelines for online CSAM crimes. The STOP CSAM Act would also allow civil lawsuits against tech companies for third-party content, creating a Section 230 carveout.
- Sens. Richard Blumenthal (D-CT) and Lindsey Graham (R-SC) re-introduced the EARN IT Act (S. 1207), which would create an exception in Section 230 to establish greater civil liability for hosting CSAM and would establish a commission to develop best practices for digital platforms to combat CSAM. This is the third introduction of the EARN IT Act in three years; the last iteration of the EARN IT Act, in the 117th Congress, was reported out of the Senate Judiciary Committee. Reps. Ann Wagner (R-MO) and Sylvia Garcia (D-TX) also re-introduced the House version of the EARN IT Act (H.R. 2732).
- The Stop CSAM Act and EARN IT Act were scheduled to be marked up by the Senate Committee on the Judiciary on April 27, but markups were delayed due to other congressional business.
- Sens. Brian Schatz (D-HI), Tom Cotton (R-AR), Chris Murphy (D-CT), and Katie Britt (R-AL) also introduced the Protecting Kids on Social Media Act in late April. The bill bans kids under 13 from using social media platforms, stops platforms from using algorithms to select content for minors, and creates a pilot program for age verification credentials for enrollment on platforms. The bill not only requires parental consent for kids ages 13-17 to use social media, but also requires companies to “take reasonable steps beyond merely requiring attestation” for age verification.
- Stakeholder Response: The Electronic Frontier Foundation responded to the introduction of the Stop CSAM Act, arguing that the bill threatens data security and free speech by creating challenges to establishing end-to-end encryption services, undermining Section 230, and requiring providers to remove content without due process. In contrast, ECPAT-USA, an anti-child trafficking organization, endorsed the Stop CSAM Act, pushing social media companies to “listen to child safety experts and act upon recommendations made through evidence-based research and youth testimonials.”
- The EARN IT Act has also sparked responses from civil society groups and trade associations. The Electronic Frontier Foundation, NetChoice, the Foundation for Economic Education, and the American Action Forum argued that the bill would do little to protect children online while violating the privacy and free speech rights of the broader public. Fight for the Future organized a campaign urging constituents to write to their senators in opposition to the EARN IT Act. As this new push for kids’ online privacy legislation at the federal level continued, states also moved to limit kids’ social media use and online access: Arkansas, California, Connecticut, Louisiana, Maryland, Minnesota, New Jersey, Ohio, and Texas are all considering bills that limit kids’ access to the internet and social media or have already passed legislation on child safety online.
- What We’re Reading: Politico wrote about state legislation regulating social media and children. The Washington Post reported on child influencers and celebrities online and the potential for kids’ online privacy laws to protect them. An op-ed in The New Yorker detailed the debate over kids’ participation online from a parent’s perspective. Tech Policy Press published the details of the Protecting Kids on Social Media Act.
Inside Agency Efforts to Protect Patient Privacy Online
- Summary: In mid-April, the Department of Health and Human Services (HHS) proposed a new rule to strengthen privacy protections under the Health Insurance Portability and Accountability Act (HIPAA), “prohibiting doctors and healthcare providers from disclosing information related to reproductive health care for the purposes of investigating, prosecuting or suing an individual for a legal abortion.” HIPAA was enacted as federal law in 1996 to protect private patient health information, guaranteeing that such information would not be disclosed without patient consent. However, the law has struggled to protect patients in a rapidly developing digital health ecosystem, including the emerging technologies adopted by traditional healthcare organizations in the United States.
- The proposed rule will now undergo a 60-day comment period. Reps. Anna Eshoo (D-CA) and Sara Jacobs (D-CA), who introduced the SAFER Health Act (H.R. 459), expressed support for the proposed rule. The SAFER Health Act limits doctors’ and insurance companies’ disclosure of patients’ healthcare information related to abortion or pregnancy in a legal proceeding without explicit consent.
- This proposed rule is the latest in a series of HHS efforts to clarify HIPAA compliance and patient privacy obligations in technology, following new guidance released by the agency’s Office for Civil Rights last December. However, the December guidance applies only to organizations that fall under HIPAA’s purview (e.g., doctors, hospitals, insurers, and partner contractors), which excludes other tech companies and organizations that may also collect some forms of user health information. Of the guidance and related actions, Andrew Crawford, senior counsel at the Center for Democracy and Technology, said, “Unfortunately, right now, the burden falls to each consumer to do their homework, and to try to figure out where data about their health is not only being generated, where it’s being stored, and who it might be being shared with.”
- Separately, the FTC has also played a role in protecting patient rights in recent months through its enforcement actions against health-tech companies BetterHelp and GoodRx. Of these investigations, Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said, “Firms that think they can cash in on consumers’ health data because HIPAA doesn’t apply should think again.” As a result of agency activity, companies working in health care have been rapidly deciphering the policy implications for their industry and are reportedly scaling back data collection and marketing efforts.
- Related happenings: In early April, HHS emphasized that the agency was investigating claims that health care provider websites were using embedded trackers to send users’ private data to third parties. Politico quoted Melanie Fontes Rainer, director of HHS’s Office for Civil Rights, who called the issue “problematic” and “widespread” at the International Association of Privacy Professionals summit held in Washington, DC this month. As a result, some hospitals have already begun disclosing their use of web trackers.
- At the state level, Washington state passed the My Health My Data Act in April, making the state the first to successfully enact a comprehensive health privacy law. The act will regulate health care providers and health plans as well as businesses collecting health-related information. The policy expands Washington’s oversight of consumer privacy rights and related enforcement actions, and grants residents more control over how their health information is used.
- What We’re Reading: Axios reviewed sample actions that private companies are taking in the absence of any government oversight and covered the gaps in current health privacy legislation. Forbes shared more details on the Health Affairs report and also reviewed other instances of hospitals sharing health information with social media companies. In related news, HHS announced proposed privacy measures that would provide protections for patients’ reproductive data.
New Legislation and Policy Updates
- The Journalism Competition and Preservation Act (JCPA) (S. 1094, sponsored by Sens. Amy Klobuchar (D-MN) and John Kennedy (R-LA)): The JCPA was reintroduced at the end of March. The bill would allow news publishers with fewer than 1,500 full-time employees to negotiate with online platforms regarding the use of the publishers’ content. News publishers and broadcasters would be authorized to join with other news providers to collectively negotiate the pricing, terms, and conditions under which online platforms can use their content. Additionally, all parties would be required to negotiate in good faith, and the proposed law would not modify any antitrust laws in the process of negotiation. The Government Accountability Office (GAO) would be responsible for studying the impact of negotiations on access to information and employment for journalists. The JCPA was reported out of the Senate Judiciary Committee 15-7 in the last Congress.
- Social Media Accountability Act of 2023 (H.R. 2635, sponsored by Rep. George Santos (R-NY)): This bill would amend Section 230 of the Communications Act of 1934 to remove social media companies’ liability protections and bar them from de-platforming U.S. citizens “based on the social, political, or religious status of such citizens unless there was a policy violation of the company.” Bill text was not publicly available at the time of publishing.
- Online Privacy Act of 2023 (OPA) (H.R. 2701, sponsored by Reps. Anna Eshoo (D-CA) and Zoe Lofgren (D-CA)): The OPA was reintroduced in the House. It seeks to establish comprehensive privacy protections by creating user data rights that allow users to request correction or deletion of their data, limiting companies’ ability to collect and use user data, and establishing a digital privacy agency to enforce privacy law.
- Honest Ads Act (H.R. 2599, sponsored by Rep. Derek Kilmer (D-WA)): This bill, re-introduced from the 116th Congress, would require online platforms displaying political advertisements to disclose the sponsor of each advertisement. Political ads sold online would be covered by the same laws that govern ads sold on television, on the radio, and via satellite. The bill would also require platforms to make reasonable efforts to ensure that communications are not purchased by a foreign national, in order to prevent foreign online influence campaigns.
Public Opinion Spotlight
A Pew Research Center survey on the use of AI in hiring was published this month. The poll surveyed 11,004 U.S. adults from December 12-18, 2022, and found that:
- 62 percent of respondents believe artificial intelligence will have a major impact on jobholders overall in the next 20 years
- 71 percent of respondents oppose employers using AI to make final hiring decisions
- 47 percent of respondents think that AI would do better than humans at evaluating all job applicants in the same way, while 15 percent of respondents think that humans would be better
- 66 percent of respondents say they would not want to apply for a job with an employer that used AI in hiring decisions
The Wall Street Journal conducted a poll from April 11-17, 2023 with 1,500 voters on whether they support or oppose a ban on TikTok. It found that:
- 46 percent of respondents support a nationwide ban of TikTok, while 35 percent oppose it
- 52 percent of voters favor selling TikTok to a U.S. buyer
- 62 percent of Republicans favor a ban on TikTok, compared to 33 percent of Democrats
- 59 percent of people age 65 and over support banning TikTok
- 37 percent of voters 18-34 favor a ban, but 48 percent oppose it
Morning Consult published a poll conducted from March 31-April 2, 2023 with 2,200 U.S. adults on whether tech companies can be held legally liable for content posted on their platforms. It found that:
- 45 percent of adults have not seen, read, or heard anything at all about Section 230
- 67 percent of adults believe companies should be legally liable for some or all content on their platform
- 48 percent of respondents believe content would be less dangerous if companies were legally liable for the content on their platforms
- 34 percent of respondents believe the influence that tech companies have over political speech would decrease if companies were legally liable
– – –
We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.