August 2024 US Tech Policy Roundup
Rachel Lau, J.J. Tolentino, Gabby Miller, Ben Lennett, Prithvi Iyer / Aug 30, 2024
Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the Managing Editor at Tech Policy Press, Gabby Miller is a staff writer at Tech Policy Press, and Prithvi Iyer is a Program Manager at Tech Policy Press.
In August, US tech policy saw significant developments, including major court rulings and legislative progress on regulating artificial intelligence at the state level. Here are some highlights:
- A district court judge found that Google violated federal antitrust law by acting to maintain a monopoly in online search. Federal appeals courts also issued two important decisions that may affect efforts to regulate social media platforms and hold them legally liable.
- The California state legislature passed a number of AI governance and transparency bills ahead of the final day of its session. Google also agreed to support a public/private partnership to provide funding for journalism in California.
- With both political conventions now concluded, the differences in the candidates’ plans for tech policy are becoming clearer.
Read on to learn more about August developments in US tech policy.
Google loses major antitrust case
Summary
On August 5, Judge Amit Mehta of the United States District Court for the District of Columbia ruled that Google acted illegally to maintain a monopoly in online search. The ruling resulted from a lawsuit by the Department of Justice, joined by 50 state attorneys general, alleging that Google used anticompetitive tactics to maintain and extend its monopolies in search services and advertising in violation of the Sherman Antitrust Act. The judge ruled that “Google is a monopolist” in search and “has acted as one to maintain its monopoly” in violation of Section 2 of the Sherman Act. In particular, Judge Mehta focused on Google’s agreements to pay mobile carriers, Android phone manufacturers, and browser operators, including Apple and Mozilla, tens of billions of dollars over the years to be the default search provider. As the decision put it, “Google has obtained a largely unseen advantage over its rivals: default distribution,” which enabled it to quash rivals and maintain a monopoly in search.
Google is likely to appeal the decision, but in the meantime, the judge will decide on appropriate remedies to address the company’s violations of the Sherman Act. Such remedies could include prohibiting the company from engaging in any exclusionary contracts that pay companies like Apple to make it the default search engine, as well as requiring Google to implement a “choice screen” on Android devices that enables users to choose another default search engine. The government is also seeking to break up the company by separating Google search from the other parts of Alphabet, including Android and Google’s ad platforms. Other options include interoperability and data sharing requirements on Google search that would lower barriers to entry for new competitors.
Stakeholder Response
Reactions to the ruling in the US v. Google antitrust case were mostly supportive of Judge Amit Mehta's decision, including bipartisan support from some political leaders, with opposition coming from industry. Public Knowledge strongly supported the ruling, viewing it as a pivotal moment in addressing Google's anticompetitive practices and arguing that the decision will foster a more equitable digital marketplace. The Open Markets Institute also hailed the ruling as a historic win for antitrust enforcers and internet users, seeing it as a crucial step in holding Google accountable for years of antitrust violations.
Colorado Attorney General Phil Weiser, among the plaintiffs in the case, was pleased “that Judge Mehta concluded that Google has abused its monopoly power and harmed consumers in the internet search marketplace.” Similarly, Texas Attorney General Ken Paxton celebrated the decision as a major win against Big Tech. He emphasized the importance of curbing anticompetitive practices and viewed the ruling as a step towards addressing Google's monopolistic behavior in internet search engines and advertisements.
Kent Walker, Google’s President of Global Affairs, said that the company would appeal the ruling. He said, “This decision recognizes that Google offers the best search engine, but concludes that we shouldn’t be allowed to make it easily available.” Tech trade association NetChoice argued the ruling could harm consumers by fracturing the market and stifling innovation, contending that the decision penalizes the company’s success and potentially hinders the ability of American companies to compete globally. The Information Technology and Innovation Foundation released a statement calling the decision “a dangerous precedent based on faulty antitrust reasoning that will cast a long shadow over the American technology industry.”
What We’re Reading
- David McCabe, ‘Google Is a Monopolist,’ Judge Rules in Landmark Antitrust Case, New York Times
- Adi Robertson, US v. Google: all the news from the search antitrust showdown, The Verge
- Karin Montoya, "Google Is A Monopolist" And Other Key Points From Judge Mehta's Ruling, Tech Policy Press
- Sumit Sharma, The DOJ’s Google Search Case – What Next?, Tech Policy Press
- Cristina Caffarra and Robin Berjon, “Google is a Monopolist” – Wrong and Right Ways to Think About Remedies, Tech Policy Press
California’s state legislature moves forward with a number of AI governance bills
Summary
August brought increased attention to California as major AI safety regulations progressed through the state legislature, serving as a microcosm for many ongoing national debates about AI regulation. This month, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047), introduced by State Sen. Scott Wiener (D), made significant headway, passing through the California Assembly and the State Senate. The bill would require firms developing advanced AI models to conduct safety tests prior to deployment and would require AI software firms operating in the state to develop methods for turning off AI models if they “go awry.” The bill would also permit California’s attorney general to sue AI companies if their technologies cause significant harm, such as “mass property damage or human casualties.”
While making steady progress through the California legislature, the bill became entangled in national debates about AI regulation. In response to mounting pressure from industry leaders including Anthropic, Amazon, and Alphabet, State Sen. Wiener agreed to industry-friendly amendments, such as removing the ability of the state’s attorney general to sue AI companies for “negligent safety practices before a catastrophic event occurred” and loosening requirements on AI labs to submit public “statements” outlining their safety practices, among other revisions. The bill now goes to Governor Gavin Newsom’s desk.
In addition, the state legislature passed numerous other AI-related bills, including AB2013, which requires companies to make specific disclosures “regarding the data used to train the generative artificial intelligence system or service.” Another bill, AB2905, would require companies to inform callers if a “prerecorded message uses an artificial voice,” and AB2355 would require a committee to disclose when a political advertisement was “generated or substantially altered using artificial intelligence.” Finally, AB2655 would require a large online platform “to block the posting of materially deceptive content related to elections in California, during specified periods before and after an election.” The governor has until September 30th to sign or veto these and several other AI-related bills.
Stakeholder Response
While stakeholder responses to the bill have been split, opponents have often pointed to the bill’s potential impact on the AI innovation ecosystem and technological progress. Rep. Zoe Lofgren (D-CA), the ranking member of the House Science Committee and a Silicon Valley representative, issued a letter highlighting concerns about SB1047, suggesting that the bill would fall short of guarding against AI’s “demonstrable risks to public safety” while creating unnecessary risks and burdens for the public and the AI innovation ecosystem. A group of eight US Congress members representing California submitted a letter to Governor Newsom urging him to veto the bill. Rep. Nancy Pelosi (D-CA) released a statement arguing that the bill was “well-intentioned but ill informed.” Dr. Fei-Fei Li, inaugural Sequoia Professor in the Computer Science Department at Stanford University, Co-Director of Stanford’s Human-Centered AI Institute, and often credited as the “Godmother of AI,” wrote in an op-ed that while SB1047 is well-intentioned, it would stifle the budding AI ecosystem, leading to significant unintended consequences for the public sector, academia, and “little tech.” A group of technology advocacy organizations, including the Chamber of Progress, R Street Institute, NetChoice, and TechFreedom, wrote a letter to Governor Newsom urging him to veto SB1047.
In contrast, supporters of SB1047 have argued that the bill would help prevent existential AI threats and establish necessary guardrails for companies at the cutting edge of AI innovation. Following the adoption of the industry-friendly amendments, Anthropic endorsed the bill, stating that it had improved “to the point where we believe its benefits likely outweigh its costs.” Prominent AI researchers Yoshua Bengio and Geoffrey Hinton, often dubbed the “godfathers of AI,” have come out in support of SB1047, with Bengio suggesting that the bill would be a “positive and reasonable step towards advancing both safety and long-term innovation in the AI ecosystem.” Elon Musk also voiced his support for the California AI bill on X, re-emphasizing his longstanding stance in favor of AI regulation.
What We’re Reading
- Alan Kyle, Tracking California’s Proposed AI Legislation, Tech Policy Press
- Rob Eleveld and Jai Jaisimha, California SB 1047: Watchdog or Lapdog?, Tech Policy Press
- Ben Brooks, California’s AI Reforms Scare All Developers, Not Just Big Tech, Tech Policy Press
- Rogé Karma, We’re Entering an AI Price-Fixing Dystopia, The Atlantic
Tech policy in the election
Summary
The Democratic Party released its official 2024 platform in advance of the Democratic National Convention in Chicago last week. The 92-page document touted achievements made under the Biden-Harris administration, including President Joe Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” While tech policy is not a major priority of the platform, the document still references a number of other tech policies, most of which expand on the Biden-Harris administration’s existing work. The most significant of these focus primarily on protecting children online and strengthening data privacy protections, which the document said fall under Democrats’ “unity” agenda. The party’s vision also aims to balance the “promise and peril” that artificial intelligence holds as a future Democratic administration “acts fast” to “ensure that AI serves the public interest.”
In contrast, the Republican Party’s official platform, published in early July, offers relatively little insight into its tech policy agenda under a second Trump term, dedicating just a few paragraphs to the issue. The platform mostly outlines a GOP approach to AI regulation, including a promise to repeal Biden’s AI executive order, and describes how the party plans to fight back against the “un-American crypto crackdown.” The GOP also plans to use “advanced technologies” to monitor and secure the border as well as modernize the military.
Both parties took a tough stance on China, suggesting that some aspects of US tech policy may remain largely the same no matter which party prevails in November. The Democrats referred to China as the country’s most important “strategic competitor,” while the Republicans promised “strategic independence” and more significant reductions in trade with Chinese companies.
Stakeholder Response
Overall, the Democratic Party platform was neither warmly embraced nor harshly criticized for its tech policy vision, likely because it was scant on details. The Tech Oversight Project Executive Director and Democratic campaign strategist, Sacha Haworth, told The Verge that “after years of tech companies manipulating our economy, endangering people seeking reproductive care, and making the climate crisis worse, it’s clear that Democrats are now committed to holding them accountable.” Haworth added that the advocacy organization looks forward to continuing the work “to stand up to tech monopolies” with a future Harris-Walz administration. The Senior Director of the Center on Cyber and Technology Innovation, Mark Montgomery, told CyberScoop that he was “disappointed” by the near-total absence of cybersecurity mentions in the Democratic document. Montgomery wanted more discussion of protecting critical infrastructure but credited the Biden-Harris administration for the energy it has put into those issues throughout its time in the White House.
What We’re Reading
- Gabby Miller, US Election: What the Democratic and Republican Parties Say About Tech Policy, Tech Policy Press
- Daisuke Wakabayashi, Stephanie Saul, and Kenneth P. Vogel, How Kamala Harris Forged Close Ties With Big Tech, New York Times
- Mark Scott, Tech policy under Harris or Trump looks pretty similar, Politico
- Ashley Gold and Maria Curi, Where Tim Walz stands on tech policy, Axios
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.
In the executive branch and agencies
- The Federal Communications Commission (FCC) published a request for comments about measures to improve transparency around AI-generated content in political advertising for radio and television. Comments are due on September 4, 2024 and reply comments are due on September 19, 2024. Members of the House Committee on Energy and Commerce sent a letter to the FCC expressing their support for the rulemaking.
- The FCC voted to proceed with a combined Notice of Proposed Rulemaking and Notice of Inquiry on consumer protections against AI robocalls.
- The FCC settled a case on AI robocalls against Lingo Telecom, with the company paying a $1 million fine after it transmitted AI-generated robocalls imitating President Biden ahead of the Democratic primary election in New Hampshire.
- The Republican chair of the Federal Election Commission (FEC) announced that the body will not propose any new rules for AI in political advertising this year.
- The National Institute of Standards and Technology (NIST) published a request for comments on its Digital Identity Guidelines, which will provide the “process and technical requirements for meeting the digital identity management assurance levels specified in each volume” of the guidelines.
- The Office of Management and Budget (OMB) released a “Readout of White House Roundtable on the Responsible Acquisition of Artificial Intelligence,” summarizing a convening of industry leaders with the White House to discuss the federal government’s acquisition of AI.
- The Government Accountability Office (GAO) wrote a letter to the Office of Science and Technology Policy (OSTP) urging OSTP to implement GAO’s recommendations on “managing climate change risks, addressing national goals through strategic planning and improved data, regulating artificial intelligence, [and] addressing research security risks.”
In civil society
- Consumer Reports sent a letter to the Federal Trade Commission (FTC) urging the commission to investigate the privacy practices of major auto manufacturers following allegations that several auto manufacturers wrongly collected and shared consumer data.
- AIPI released polling of 1,080 respondents, conducted on August 11, on government action on AI. It found that 59 percent of respondents prefer safety mandates over a ban on more sophisticated AI technologies, and that 53 percent thought that Kamala Harris, if elected president, should prioritize minimizing the risks of AI.
- AI Now published a report outlining how the Food and Drug Administration (FDA) can be seen as an example of how to approach AI regulation.
In Congress
- Sen. Mike Rounds (R-SD) unveiled a set of five AI bills for the Senate to consider in early September when the chamber resumes business. The legislation includes bills to address AI in the financial services industry, establish a national AI literacy strategy, require the Defense Secretary to “use AI in a pilot program to optimize the Pentagon’s operational logistics,” and create a centralized biomedical data exchange.
In industry
- Google DeepMind published research on the misuse of generative AI in partnership with Jigsaw and Google.org, analyzing almost 200 media reports of public misuse to identify common strategies for generative AI misuse.
- Google joined a partnership with the State of California, news publishers, and philanthropy to provide ongoing financial support to newsrooms across California and launch a National AI Accelerator. Google has committed to giving $110 million over five years to support local newsrooms in the state, with another $70 million coming from the state government. The partnership comes in lieu of AB886, a bill that would have required online platforms like Google to negotiate payments to digital journalism providers for the use of their content.
- OpenAI announced that the US Agency for International Development (USAID) will be the first federal agency to use its products, with the goal of using AI to reduce administrative burdens and facilitate partnerships with local organizations.
- OpenAI, Adobe, and Microsoft voiced support for California's AB3211, a bill that would require large online platforms to watermark AI-generated content. The bill is heading for a final vote in the California Assembly in August.
- Google’s Threat Analysis Group (TAG) published a report examining recent surveillance attacks on Mongolian government websites. In addition to providing details about the attacks, TAG offered recommendations for how to mitigate risks of potential infection.
In the courts
- The US Court of Appeals for the Ninth Circuit partially upheld a lower-court ruling in NetChoice v. Bonta, keeping the preliminary injunction against California’s Age Appropriate Design Code in place. The panel of judges ruled that “certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children.”
- The US Court of Appeals for the Third Circuit reversed a district court’s decision that dismissed a lawsuit against TikTok for its role in the death of a young girl who was recommended videos related to the so-called blackout challenge. The district court had ruled that TikTok was not liable, given the immunity for social media platforms provided under Section 230. The Third Circuit based its decision in part on the Supreme Court’s recent NetChoice decision, arguing that the Supreme Court recognized “that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” and thus those algorithms are not immunized by Section 230, which protects platforms from liability for content posted by their users.
- The US Court of Appeals for the Fifth Circuit paused enforcement of its July 2024 decision in Consumers’ Research v. FCC pending an appeal from the Federal Communications Commission. The Fifth Circuit ruled in July that the current funding mechanism for the Universal Service Fund (USF), established under the Telecommunications Act of 1996, is unconstitutional.
Other Legislation Updates
The following bills were introduced across the House and Senate in August:
- Artificial Intelligence Acquisitions Act - S.4976. Introduced by Sens. Marco Rubio (R-FL), Rick Scott (R-FL), and John Barrasso (R-WY), the bill would “prohibit the US government, and private entities it contracts with, from procuring or using adversarial AI.” More specifically, it would task the “Under Secretary of Commerce for Standards and Technology, in coordination with the Federal Acquisitions Security Council, to compile and update a list of AI products and services from countries of concern, including China, Russia, Iran, North Korea, Cuba, Venezuela, and Syria,” and allow “companies contracting with the US two years to divest from products on the list.”
- National Science Foundation Artificial Intelligence Education Act of 2024 - H.R.9402. Introduced by Reps. Vince Fong (R-CA) and Andrea Salinas (D-OR), the bill would “support the National Science Foundation (NSF) in educational and professional development relating to artificial intelligence.” More specifically, it would allow the NSF to award AI scholarships, especially when applied to agriculture, education, and advanced manufacturing, and establish up to eight “Centers of AI Excellence” at community colleges.
- Expanding AI Voices Act of 2024 - H.R.9403. Introduced by Reps. Valerie Foushee (D-NC) and Frank Lucas (R-OK), the bill would facilitate capacity building, promote increased access, and broaden participation in artificial intelligence research, education, and workforce development among populations historically underrepresented in STEM.
- The Unleashing AI Innovation in Financial Services Act - S.4951/H.R.9309. Introduced by Sens. Mike Rounds (R-SD) and Martin Heinrich (D-NM) and Reps. French Hill (R-AR) and Ritchie Torres (D-NY), the bill would “provide regulatory sandboxes that permit certain persons to experiment with artificial intelligence without expectation of enforcement actions.”
- The Consumers Learn AI Act - S.#. Introduced by Sens. Mark Kelly (D-AZ) and Mike Rounds (R-SD), this bill would direct the Secretary of Commerce to develop “a national literacy strategy, providing specific AI use case guidance and conduct a national media campaign to help consumers make informed decisions about how they use and interact with AI.”
- The GUIDE AI Act - S.4638. Introduced by Sen. Mike Rounds (R-SD), the Act would establish “a centralized data exchange center for biomedical data through the National Institutes of Health (NIH), the National Library of Medicine (NLM), and the National Artificial Intelligence Research Resource (NAIRR).” Sen. Rounds proposed the Act as an amendment to the Fiscal Year 2025 Defense Appropriations Act.
- The Increasing AI Transparency in Financial Services Act - S.4638. Introduced by Sen. Mike Rounds (R-SD), the bill would require “reports on AI regulation in the financial services industry.” This Act is also a proposed amendment to the Fiscal Year 2025 Defense Appropriations Act.
- To require the Secretary of Defense… - S.4758. Introduced by Sen. Mike Rounds (R-SD), the bill would require the Secretary of Defense to carry out a pilot program on using artificial intelligence-enabled software to optimize the workflow and operations of depots, shipyards, and other manufacturing facilities run by the Department of Defense.
- Digital Integrity in Democracy Act - S.4977. Introduced by Sens. Peter Welch (D-VT), Amy Klobuchar (D-MN), Jeff Merkley (D-OR), Ben Ray Luján (D-NM), Michael Bennet (D-CO), and Mazie Hirono (D-HI), the bill would “hold accountable operators of social media platforms that intentionally or knowingly host false election administration information.”
- The Department of Energy Quantum Leadership Act - S.4932. Introduced by Sens. Dick Durbin (D-IL) and Steve Daines (R-MT), the bill would amend the National Quantum Initiative Act to provide for a research, development, and demonstration program through the U.S. Department of Energy (DOE).
- Public and Private Sector Ransomware Response Coordination Act - H.R.9315. Introduced by Reps. Zach Nunn (R-IA) and Josh Gottheimer (D-NJ), the bill would “direct the Secretary of the Treasury to submit a report on coordination in the public and private sectors in responding to ransomware attacks on financial institutions.”
- Cyber Ready Workforce Act - H.R.9270. Introduced by Reps. Susie Lee (D-NV) and Brian Fitzpatrick (R-PA), the bill would “direct the US Department of Labor to award grants to increase access to registered apprenticeship programs in cybersecurity.” The Act is companion legislation to a bill introduced in the Senate last month.
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.