
Breaking Open Tech’s Black Box

Yaël Eisenstat / Jun 16, 2023

Yaël Eisenstat is a Vice President at the Anti-Defamation League (ADL), where she heads the Center for Technology and Society.

As the urgency increases to rein in “big tech,” more attention is turning to the need for transparency as a starting point. For years, I have voiced serious concerns about the lack of accountability in the social media industry and called for transparency as a necessary first step to breaking open the “black box.” We need federal transparency legislation not just for transparency’s sake, but as a key ingredient to solving some of the very real problems of our current online ecosystem.

Today, mainstream digital platforms hold immense power over every aspect of our lives: they influence the news we consume, the information we pursue, the connections we make, the narratives we believe, and even the reach of our own voices. This power is not only concentrated within a handful of dominant companies but also rests in the hands of select decision-makers within those companies.

No other industry operates under as much secrecy and autonomy as “big tech.” The lack of transparency surrounding tech company operations is truly unprecedented, and frankly unacceptable. Platforms operating inside a “black box” ultimately leave users in the dark about the processes and business decisions that directly impact their lives.

For far too long, social media companies have operated largely unchecked, implementing monetization policies, prioritizing certain political and influential users, and leveraging AI-powered systems without any real accountability to the public. We know many of these practices have caused significant, lasting harm. At ADL, our Center for Technology and Society’s annual survey of online hate and harassment has consistently shown that the majority of identity-based online harassment—including severe harassment like violent threats, cyberstalking and doxing—happens on mainstream social media platforms. And on top of this, platforms’ use of algorithms, recommender systems, and targeting tools can cause harm by amplifying and normalizing extremist and conspiracy narratives.

For years, we knew of these harms only thanks to journalists, researchers, and whistleblowers. When I left Facebook, where I served as Global Head of Elections Integrity in 2018, I spoke of the dangerous lack of transparency in political advertising and detailed how Facebook leadership refused to take the necessary steps to protect the U.S. midterm election against voter suppression tactics. At the time, it was my word against theirs.

That is no longer the case. Troves of documents released by whistleblower Frances Haugen have exposed how Facebook algorithms prioritize divisive and violent content and push users toward conspiracy theories such as QAnon, how the company failed to stop the spread of false election narratives like “Stop the Steal,” and how Facebook employees knew their own “core product mechanics” contributed to the proliferation of hate speech and misinformation. All of this came after Facebook’s Chief Operating Officer at the time, Sheryl Sandberg, claimed that the insurrection was planned on other platforms and that Facebook had taken down “Stop the Steal” and QAnon groups.

Despite those revelations, tech companies continue to hide behind the veils of “self-reporting” and “self-regulation.” As a result, the public and policymakers have struggled to truly understand the business and design decisions that lead to so many of these harms.

If “big tech” is indeed a black box, what does it mean to actually break it open?

The first step in crafting smart, effective legislation is having access to the information necessary to truly evaluate these harms and what role the companies’ own tools, design practices and business decisions might play. Federal lawmakers must pass legislation requiring platforms to disclose information about how their practices, policies, and products impact users. Effective transparency legislation should be comprehensive, requiring disclosures about content policies and enforcement as well as advertising systems. A critical component is providing researchers and third-party auditors with access to data. Finally, with the rapid adoption of generative AI, lawmakers must also consider what information we need to know about these complex AI systems to prevent harm.

California's AB 587, which was championed by ADL and enacted into law in September 2022, serves as one example of legislation that strikes a balance between respecting First Amendment principles and providing essential information to the public about tech companies’ content policies. Rather than mandating specific rules for platforms, the law requires that platforms disclose their existing rules – including comprehensive information about the content moderation policies in place, changes to those policies, and exceptions to rules. It also requires tech companies to disclose anonymized data about removed, de-amplified, and demonetized content. Importantly, to better understand how platforms learn about harmful content, the legislation requires sharing whether content flags involved human moderators, civil society partners, artificial intelligence, or a combination thereof.

The Platform Accountability and Transparency Act (PATA), which was recently reintroduced in the U.S. Senate by a bipartisan coalition that includes Senators Chris Coons (D-DE), Bill Cassidy (R-LA), Amy Klobuchar (D-MN), John Cornyn (R-TX), Richard Blumenthal (D-CT), and Mitt Romney (R-UT), is a considerable step forward. If passed, PATA would require tech companies to provide increased access to data for researchers and compel platforms to disclose information about high-visibility content, ad libraries, recommendation engines, and content moderation systems. The legislation is a good start. To strengthen it, lawmakers should address the use of generative AI, include specifics about actioned content, and add information about how violations will be reported. This is the closest Congress has come to meaningful action on transparency, and it deserves equally serious consideration by the House.

In the case of systems that use generative AI, lawmakers should consider requiring increased access to information about the training data leveraged by the large language models (LLMs) that underpin complex AI systems. Additionally, they should consider requiring generative AI developers to improve explainability, the capacity to explain why an AI system reached a particular decision. This is crucial if users, policymakers, and advocates are to comprehend and assess the magnitude of AI’s potential pitfalls.

Tech companies have the power to create safer online spaces for users. They can bolster researchers’ ability to produce evidence-based insights and support lawmakers in their efforts to create effective and informed policies. However, if the past is any precedent, tech companies will not engage in these efforts voluntarily. They’ll argue that it’s too complicated, too expensive, or too risky to disclose information about their operations. Of course, transparency proposals must account for a company’s size when weighing the resources required to comply with new laws. But for companies making hundreds of millions, even billions of dollars in revenue, the time for excuses has passed.

We need accurate, consistent, and comprehensive information to understand and navigate our online ecosystem. Every day Congress waits to pass federal transparency legislation is a day we don’t have the very information we need to safeguard children, protect vulnerable communities, and uphold our democracy. The time is now to break open the black box.
