
Opening testimonies: Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds

Justin Hendrix / Apr 27, 2021

On Tuesday, April 27th, 2021, the Subcommittee on Privacy, Technology & the Law in the U.S. Senate Judiciary Committee hosted a hearing on Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. Below are the written testimonies from the participants in the hearing, including:

  • Monika Bickert: Vice President for Content Policy, Facebook
  • Lauren Culbertson: Head of U.S. Public Policy, Twitter, Inc.
  • Alexandra N. Veitch: Director, YouTube Government Affairs & Public Policy, Americas
  • Joan Donovan, Ph.D.: Research Director at Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy
  • Tristan Harris: President and Co-Founder, Center for Humane Technology

Testimony of Monika Bickert: Vice President for Content Policy, Facebook

I. Introduction

Chairman Coons, Ranking Member Sasse, and distinguished members of the Subcommittee, thank you for the opportunity to appear before you today. My name is Monika Bickert, and I am the Vice President of Content Policy at Facebook. Prior to assuming my current role, I served as lead security counsel for Facebook, working on issues ranging from children’s safety to cybersecurity. And before that, I was a criminal prosecutor with the Department of Justice for eleven years in Chicago and Washington, DC, where I prosecuted federal crimes, including public corruption and gang violence.

Facebook is a community of more than two billion people, spanning countries, cultures, and languages across the globe. Every day, members of our community use Facebook to connect and share with the people they care about. These personal interactions are at the core of our mission to give people the power to build community and bring the world closer together.

It is important to us that people find content that is meaningful to them on our platform, and our algorithms help them do just that. We also understand that people have questions about how these algorithmic systems work. I look forward to discussing today the ways in which Facebook is already working to provide greater transparency and the additional steps we are taking to put people even more firmly in charge of the content they see.

II. Algorithmic Ranking on Facebook

Facebook uses algorithms for many of our product features, including to enable our search function and to help enforce our policies. But when people refer to Facebook’s “algorithm,” they are often talking about the content ranking algorithms that we use to order a person’s News Feed.

The average person has thousands of posts in her News Feed each day. This includes the photos, videos, and articles shared by the friends and family she chooses to connect to on the platform, the Pages she chooses to follow, and the Groups she chooses to join. Most people don’t have time to look at all of this content every day, so we use a process called ranking to sort this content and put the things we think you will find most meaningful closest to the top of your News Feed. This ranking process is personalized and is driven by your choices and actions.

To make sure you don’t miss meaningful content from your friends and family, our systems consider thousands of signals, including, for example, who posted the content; when it was posted; whether it’s a photo, video, or link; and how popular it is on the platform. The algorithms use these signals to predict how likely content is to be relevant and meaningful to you: for example, how likely you might be to like it or find that viewing it was worth your time. The goal is to make sure you see what you find most meaningful—not to keep you on the service for a particular length of time.
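To make the signal-to-prediction step concrete, here is a minimal, hypothetical sketch of how a handful of signals might be combined into a prediction of how meaningful a post is likely to be. The signal names, weights, and logistic scoring function are illustrative assumptions only, not Facebook's actual model, which uses thousands of learned signals.

```python
from dataclasses import dataclass
import math

@dataclass
class PostSignals:
    """Illustrative signals for one candidate post (hypothetical fields)."""
    posted_by_close_friend: bool   # who posted the content
    hours_since_posted: float      # when it was posted
    is_video: bool                 # whether it's a photo, video, or link
    popularity: float              # how popular it is on the platform, 0..1

def predicted_meaningfulness(s: PostSignals) -> float:
    """Toy logistic model mapping signals to a 0..1 'likely meaningful' score.
    Real ranking uses learned models; these weights are made-up placeholders."""
    z = (2.0 * s.posted_by_close_friend
         - 0.05 * s.hours_since_posted
         + 0.3 * s.is_video
         + 1.5 * s.popularity)
    return 1.0 / (1.0 + math.exp(-z))
```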

Notably, in 2018, we changed the way we approached News Feed rankings to focus not only on serving people the most relevant content, but also on helping them have more meaningful social interactions—primarily by doing more to prioritize content from friends, family, and Groups they are part of. We recognized that this shift would lead to people spending less time on Facebook, because Pages—where media entities, sports teams, politicians, and celebrities, among others, tend to have a presence—generally post more engaging (though less personally meaningful) content than a user’s personal friends or family. The prediction proved correct; the change led to a decrease of 50 million hours’ worth of time spent on Facebook per day, and we saw a loss of billions of dollars in the company’s market cap. But we view this change as a success because it improved the experience of our users, and we think building good experiences is good for the business in the long term.

III. Increasing Transparency and Control

This sifting and ranking process results in a News Feed that is unique to each person. Naturally, users don’t see the computer code that makes up the algorithm, but we do share information about how the ranking process works, including publishing blog posts that explain the ranking process and announce any significant changes.

Of course, not everyone is going to read our blogs about how the systems work, so we’re also doing more to communicate directly to people in our products. For some time, people on Facebook have been able to click “Why Am I Seeing This?” on any ad they see to learn why that ad was placed in their News Feed, and they’re also able to change their advertising preferences. This real-time transparency and control approach has helped improve the Facebook experience for many people. In 2019, we launched a similar “Why Am I Seeing This?” tool to help people understand why a particular post showed up where it did in their News Feed. To access it, people simply need to click on the post itself, then click on “Why am I seeing this post?,” and they will see information about why that post appears where it does. This tool also provides easy access to their News Feed Preferences, so they can adjust the composition of their News Feed.

We have increased the control that people have over their News Feed so that they know they are firmly in charge of their experience. For instance, we recently launched a suite of product changes to help people more easily identify and engage with the friends and Pages they care most about. And we’re placing a new emphasis not just on creating such tools, but on ensuring that they’re easy to find and to use.

A new product called Favorites, which improves on our previous See First control, allows people to manually select the friends and Pages that are most meaningful to them. Posts from people or Pages that the user selects will then be shown higher in that user’s News Feed and marked with a star. A person can even choose to see a feed of only the content that comes from those Favorite sources in a new version of News Feed called the Favorites feed.

Facebook users can also choose to reject the personalized ranking algorithm altogether and instead view their News Feed chronologically, meaning that their News Feed simply shows them the most recent posts from their eligible sources of content in reverse chronological order.

So that people can seamlessly transition among standard News Feed, Favorites feed, and the chronological Most Recent feed, Facebook now provides a bar on the site where users can select which version of News Feed they want to see.

As we work to enhance transparency and control, we’re also continuously improving the way our ranking systems work so that people see what’s most meaningful to them. Just last week, we announced that we are expanding our work to survey people about what’s most meaningful to them and worth their time. These efforts include new approaches to take into account whether people find a post inspirational, whether they are interested in seeing content on a particular topic, or whether certain content leaves people feeling negative. We are also making it easier to give feedback directly on an individual post. We believe that continuing to invest in new ways to learn more about what people want (and don’t want) to see in News Feed will help improve the ranking process and the user experience. We’ll continue to incorporate this feedback into our News Feed ranking process in the hopes that Facebook can leave people feeling more inspired, connected, and informed.

IV. Working to Combat Harmful Content and Misinformation

Of course, News Feed ranking isn’t the only factor that goes into what a person might see on Facebook. There are certain types of content we simply don’t allow on our services. Our content policies, which we call our Community Standards, have been developed over many years with ongoing input from experts and researchers all over the world. We work hard to enforce those standards to help keep our community safe and secure, and we employ both technology and human review teams to do so. We publish quarterly reports on our work, and we’ve made significant progress identifying and removing content that violates our standards.

We recognize that not everyone agrees with every line in our Community Standards. In fact, there is no perfect way to draw the lines on what is acceptable speech; people simply do not agree on what is appropriate for discourse. We also recognize that many people think private companies shouldn’t be making so many big decisions about what content is acceptable. We agree that it would be better if these decisions were made according to frameworks agreed to by democratically accountable lawmakers. But in the absence of such laws, there are decisions that need to be made in real time.

Last year, Facebook established the Oversight Board to make an independent, final call on some of these difficult decisions. It is an external body of experts, and its decisions are binding—they can’t be overruled by Mark Zuckerberg or anyone else at Facebook. Indeed, the Board has already overturned a number of Facebook’s decisions, and we have adhered to the Board’s determinations. The Board itself is made up of experts and civic leaders from around the world with a wide range of backgrounds and perspectives, and they began issuing decisions and recommendations earlier this year.

If content is removed for violating our Community Standards, it does not appear in News Feed at all. Separately, there are types of content that might not violate Facebook’s Community Standards and are unlikely to contribute to a risk of actual harm but are still unwelcome to users, and so the ranking process reduces their distribution. For example, our algorithms actively reduce the distribution of things like clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting “miracle cures”), and engagement bait (posts that explicitly seek to get users to engage with them). Facebook also reduces distribution for posts deemed false by one of the more than 80 independent fact-checking organizations that evaluate the accuracy of content on Facebook and Instagram. So overall, how likely a post is to be relevant and meaningful to you acts as a positive in the ranking process, and indicators that the post may be unwelcome (although non-violating) act as a negative. The posts with the highest scores after that are placed closest to the top of your Feed.
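Read as pseudocode, the process described above amounts to boosting posts by their predicted relevance and demoting, rather than removing, non-violating but unwelcome content before sorting. The sketch below is a hypothetical illustration; the field names and demotion multipliers are assumptions, not Facebook's published values.

```python
def rank_feed(posts):
    """Toy News Feed ranking: relevance acts as a positive, integrity demotions
    act as negatives; the highest-scoring posts land closest to the top."""
    def score(post):
        s = post["relevance"]                      # predicted meaningfulness, 0..1
        if post.get("is_clickbait"):
            s *= 0.5                               # demoted, not removed
        if post.get("fact_check_rating") == "false":
            s *= 0.2                               # rated false by a fact-checker
        return s
    return sorted(posts, key=score, reverse=True)

# Example: a highly relevant but fact-checked-false post ranks below a
# moderately relevant, unproblematic one.
feed = rank_feed([
    {"relevance": 0.9, "fact_check_rating": "false"},
    {"relevance": 0.6},
])
```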

Facebook’s approach goes beyond addressing sensational and misleading content post by post. When Pages and Groups repeatedly post misinformation, Facebook reduces their overall distribution. If Groups or Pages repeatedly violate our Community Standards, we restrict or remove them.

The reality is that it’s not in Facebook’s interest—financially or reputationally—to push users towards increasingly extreme content. The company’s long-term growth will be best served if people continue to use and value its products for years to come. If we prioritized trying to keep a person online for a few extra minutes, but in doing so made that person unhappy or angry and less likely to return in the future, it would be self-defeating. Furthermore, the vast majority of Facebook’s revenue comes from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content—they’ve always been very clear about that. Even though troubling content is a very small proportion of the total content people see on our services (hate speech is viewed 7 or 8 times for every 10,000 views of content on Facebook), Facebook’s long-term financial self-interest is to continue to reduce it so that advertisers and users have a good experience and continue to use our services.

V. Conclusion

Facebook’s algorithms are a key part of how we help people connect and share, and how we fight harmful content and misinformation on our platform. We will continue to do more to help people understand how our systems work and how they can control them. This is an area where we are investing heavily, and we are committed to continuing to improve.

Thank you, and I look forward to your questions.

Testimony of Lauren Culbertson: Head of U.S. Public Policy, Twitter, Inc.

Chairman Coons, Ranking Member Sasse, and Members of the Subcommittee:

Thank you for the opportunity to appear before you today to provide testimony on behalf of Twitter at today’s hearing, “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds.”

Twitter’s purpose is to serve the public conversation. In 2006, this meant providing a platform for people to share 140-character status updates; since then, our service has evolved to become the go-to place for people to see what’s happening in the world, share opinions and observations, and engage in conversations on topics as diverse as sports, popular culture, and politics.

While technology has changed significantly since we were founded 15 years ago, our mission has not. We remain committed to giving people the power to create and share ideas and information instantly with the world.

Many of the questions we grapple with today are not new, but the rise and evolution of the online world have magnified the scale and scope of these challenges. As a global company that values free expression, we find ourselves navigating these issues amidst increasing threats to free speech from governments around the world. We strive to give people a voice while respecting applicable law and staying true to our core principles.

We use technology every day in our efforts to automatically improve outcomes and experiences for people on Twitter. We do that, in part, through algorithms. For example, our machine learning tools help surface potentially abusive or harmful content, including content that violates Twitter’s Rules, to human moderators for review. In fact, we now take enforcement action on more than half of the abusive Tweets that violate our rules before they’re even reported. We think this is critical, as we don't think the burden to identify and report such content should be on those who are the subject of abusive content.

As members of Congress and other policymakers debate the future of Internet regulation, they should closely consider the ways technology, algorithms, and machine learning make Twitter a safer place for the public conversation and enhance the global experience with the Internet at large.

We’ve invested significantly in our systems and have made strides to promote healthy conversations. However, we believe that as we look to the future, we need to ensure that all our efforts are centered on trust. Our content moderation efforts or the deployment of machine learning can be successful only if people trust us. That’s why we think it is critical that we focus on being more open and decentralized. That means we must prioritize and build into our business increased transparency, consumer choice, and competition. In my testimony, I will highlight how we are innovating and experimenting in this area through (1) expanded algorithmic choice; (2) the Twitter Responsible Machine Learning initiative; (3) the Birdwatch initiative; and (4) the Bluesky project.

Expanded Algorithmic Choice

At Twitter, we want to provide a useful, relevant experience to all people using our service. With hundreds of millions of Tweets every day on the service, we have invested heavily in building systems that organize content to show individuals the most relevant information for that individual first. With over 192 million people using Twitter each day in dozens of languages and countless cultural contexts, we rely upon machine learning algorithms to help us organize content by relevance.

We believe that people should have meaningful control over key algorithms that affect their experience online. In 2018, we redesigned the home Timeline, the main feature of our service, to allow people to control whether they see a ranked timeline or a reverse-chronological feed of Tweets from the accounts and topics they follow. This “sparkle icon” improvement has allowed people using our service to directly experience how algorithms shape what they see and has allowed for greater transparency into the technology we use to rank Tweets. This is a good start. And we believe it points to an exciting, market-driven approach that provides individuals greater control over the algorithms that affect their experience on our service.

Responsible Machine Learning Initiative

We are committed to gaining and sharing a deeper understanding of the practical implications of our algorithms. Earlier this month, we launched our “Responsible Machine Learning” initiative, a multi-pronged effort designed to research the impact of our machine learning decisions, promote equity, and address potential unintentional harms. Responsible use of technology includes studying the effects that the technology can have over time. Sometimes, a system designed to improve people’s online experiences could begin to behave differently than was intended in the real world. We want to make sure we are studying such developments and using them to build better products.

This initiative is industry-leading and a first step in a longer journey of evaluating our algorithms and applying those findings to make Twitter and our entire industry better. We will apply what we learn to our work going forward, and we plan to share our findings and solicit feedback from the public. While we are hopeful about the ways this may improve our service, our overarching goal is increasing transparency and contributing positively to the field of technology ethics at large.

Birdwatch

We’re exploring the power of decentralization to combat misinformation across the board through Birdwatch — a pilot program that allows people who use our service to apply crowdsourced annotations to Tweets that are possibly false or misleading. We know that when it comes to adding context, not everyone trusts tech companies — or any singular institution — to determine what context to add and when. Our hope is that Birdwatch will expand the range of voices involved in tackling misinformation as well as streamline the real-time feedback people already add to Tweets. We are working to ensure that a broad range of voices participate in the Birdwatch pilot so we can build a better product that meets the needs of diverse communities. We hope that engaging the broader community through initiatives like Birdwatch will help mitigate current deficits in trust.

We are committed to making the Birdwatch site as transparent as possible. All data contributed to Birdwatch will be publicly available and downloadable. As we develop algorithms that power Birdwatch — such as reputation and consensus systems — we intend to publish that code publicly in the Birdwatch Guide.
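The testimony commits to publishing the reputation and consensus code in the Birdwatch Guide but does not describe it here. Purely as an illustration of what a crowd-consensus check can look like, the sketch below computes a naive helpfulness ratio over contributor ratings; the threshold, minimum rating count, and the omission of rater reputation and viewpoint diversity are simplifying assumptions, not Birdwatch's published algorithm.

```python
def note_helpfulness(ratings):
    """Fraction of contributors who rated a Birdwatch note 'helpful'."""
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r == "helpful") / len(ratings)

def should_display(ratings, min_ratings=5, threshold=0.8):
    """Surface a note only once enough contributors broadly agree it is helpful.
    Both parameters are illustrative, not Twitter's actual values."""
    return len(ratings) >= min_ratings and note_helpfulness(ratings) >= threshold
```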

Bluesky

Twitter is funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media. It is our hope that Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards could support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost. We recognize that this effort is complex, unprecedented, and will take time, but we currently plan to provide the necessary exploratory resources to push this project forward.

Conclusion

We appreciate the enormous privilege to host some of the most important conversations happening at any given time — from real-time updates on Supreme Court rulings to information-sharing about COVID-19 vaccine clinical trials. We are proud of the open service we have built and the steps we take each day to ensure a safe venue for diverse voices and vibrant debate. Moving forward, we believe that more open and decentralized systems will increase transparency, provide more consumer control and choice, and increase competition across our industry. Our hope is that such a system will lead to the necessary innovation to meet today’s needs and solve tomorrow’s challenges. Most importantly, it will build trust.

Thank you again for the opportunity to share Twitter’s perspective with the Subcommittee and the public.

Testimony of Alexandra N. Veitch: Director, YouTube Government Affairs & Public Policy, Americas

Chairman Coons, Ranking Member Sasse, and distinguished members of the subcommittee:

Thank you for the opportunity to appear before you today. My name is Alexandra Veitch, and I am the Director of Public Policy for the Americas at YouTube. As part of my role, I lead a team that advises the company on public policy issues around online, user-generated content.

At YouTube, we believe that the Internet has been a force for creativity, learning, and access to information. Supporting this free flow of ideas is at the heart of our mission to give everyone a voice and show them the world. We have built and continue to improve YouTube to empower users to access, create, and share information like never before; this has enabled billions to benefit from a bigger, broader understanding of the world. In addition, our platform has created economic opportunities for small businesses across the country and around the world, and we have provided artists, creators, and journalists a platform to share their work. Over the last three years, we’ve paid more than $30 billion to creators, artists, and media companies around the world. And according to an Oxford Economics report, YouTube's creative ecosystem supported the equivalent of 345,000 full-time jobs in 2019 in the United States.

Over the years, we have seen more and more people come to YouTube to share their experiences and understand their world more deeply. This is especially true when it comes to learning new skills, participating in civic engagement, and developing informed opinions about current events. With so many users around the world looking to YouTube for information, we have a responsibility to provide a quality experience and support an informed citizenry. Over the past several years, responsibility has been a critical area of investment across our company. We have focused extensively on developing policies and building product solutions to live up to this responsibility while preserving the opportunity of an open platform. And we work continuously to identify areas where we can do more. In my testimony today, I will (1) explain how we think about algorithms, (2) discuss our approach to responsibility and how technology supports this work, (3) highlight our efforts to provide more transparency and visibility into how YouTube works, and (4) illustrate how our products protect users.

How YouTube thinks about algorithms

YouTube is a multi-faceted video-sharing platform enjoyed by billions of consumers and creators. The popularity of our service unlocks business opportunities for creators and helps businesses grow their reach. With this volume of users and economic engine for creators comes a significant responsibility to protect those users. With more than 500 hours of video uploaded to YouTube per minute, enabling users to easily find content they are looking for and will enjoy while protecting them from harmful content simply would not be possible without the help of technology.

Because of the importance of algorithms in the YouTube user experience, we welcome the opportunity to clarify our approach to this topic. In computer science terms, an algorithm is a set of instructions that direct a computer to carry out a specific task. An algorithm can be simple—asking a computer to calculate the sum of two numbers—or extremely complex, such as machine learning algorithms that consistently refine their ability to accomplish the goal for which they were programmed. An algorithm can manage a few inputs or nearly limitless inputs, and it can do one thing or perform a number of functions at once. Nearly everything that people do today on their devices is made possible by algorithms.

YouTube uses machine learning techniques to manage and moderate content on YouTube. YouTube’s machine learning systems sort through the massive volume of content to find the most relevant and useful results for a user’s search query, to identify opportunities to elevate authoritative news, and to provide a user with additional context via an information panel if appropriate. We also rely on machine learning technology to help identify patterns in content that may violate our Community Guidelines or videos that may contain borderline content—content that comes close to violating our Community Guidelines but doesn’t quite cross the line. These systems scan content on our platform 24/7, enabling us to review hundreds of thousands of hours of video in a fraction of the time it would take a person to do the same. For example, more than 94% of the content we removed between October and December of 2020 was first flagged by our technology. This underscores just how critical machine learning is for content moderation.

Another area where we use machine learning is for recommendations. Recommendations on YouTube help users discover videos they may enjoy, and they help creator content reach new viewers and grow their audience across the platform. We share recommendations on YouTube’s homepage and in the “Up next” section to suggest videos a user may want to watch after they finish their current video. Our recommendation systems take into account many signals, including a user’s YouTube watch and search history (subject to a user’s privacy settings) and channels to which a user has subscribed. We also consider a user’s context—such as country and time of day—which, for example, helps our systems show locally relevant news, consistent with our effort to raise authoritative voices. Our systems also take into account engagement signals about the video itself—for example, whether others who clicked on the same video watched it to completion or clicked away shortly after starting to view the video. It is important to note that, where applicable, these signals are overruled by the other signals relating to our efforts to raise up content from authoritative sources and reduce recommendations of borderline content and harmful misinformation—even if it decreases engagement.
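As a rough illustration of how such signals could combine, and of how authoritativeness can override raw engagement for sensitive topics, consider the sketch below. The field names, weights, and topic list are assumptions made for exposition; YouTube's actual systems are learned models over many more signals.

```python
NEWSY_TOPICS = {"news", "health", "elections"}  # illustrative list only

def score_candidate(video, user):
    """Toy recommendation score for one candidate video."""
    score = 0.0
    score += 2.0 if video["channel"] in user["subscriptions"] else 0.0
    score += 1.0 if video["topic"] in user["watch_history_topics"] else 0.0
    score += 0.5 if video["country"] == user["country"] else 0.0   # local context
    score += video["completion_rate"]   # did others watch it to completion?

    # For topics prone to misinformation, authoritativeness outweighs engagement,
    # and borderline content is demoted even if it would otherwise engage.
    if video["topic"] in NEWSY_TOPICS:
        score += 3.0 if video["is_authoritative_source"] else 0.0
        if video["is_borderline"]:
            score *= 0.1
    return score
```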

We also empower our users by giving them significant control over personalized recommendations, both in terms of individual videos as well as the way that watch and search history may inform recommendations. Users control what data is used to personalize recommendations by deleting their activity history or pausing it through the activity controls. Signed-out users can pause and clear their watch history, while signed-in users can also view, pause, and edit watch history at any time through the YouTube history settings. Clearing watch history means that a user will not be recommended videos based on content they previously viewed. Users can also clear their search history, remove individual search entries from search suggestions, or pause search history using the YouTube History settings.

In-product controls enable users to remove recommended content—including videos and channels—from their Home pages and Watch Next. Signed in users can also delete YouTube search and watch history through the Google My Account settings, set parameters to automatically delete activity data in specified time intervals, and stop saving activity data entirely. We also ask users directly about their experiences with videos using surveys that appear on the YouTube homepage and elsewhere throughout the app, and we use this direct feedback to fine-tune and improve our systems for all users.

YouTube’s Pillars of Responsibility: the 4Rs

Responsibility is our number one priority at YouTube. Some speculate that we hesitate to address problematic content because it benefits our business; this is simply false. Failure to consistently take sufficient action to address harmful content not only threatens the safety of our users and creators, it also threatens the safety of our advertising partners’ brands. Our business depends on the trust of our users and our advertisers. This is why we have made significant investments over the past few years in teams and systems that protect YouTube’s users, partners, and business. Our approach towards responsibility involves 4 “Rs” of responsibility, described in detail below.

REMOVE VIOLATIVE CONTENT: Our Community Guidelines provide clear, public-facing guidance on content that is not allowed on the platform. These include policies against spam, deceptive practices, scams, hate, harassment, and identity misrepresentation and impersonation. We remove content that violates our policies as quickly as possible, and removed videos represent a fraction of a percent of total views on YouTube. We work continuously to shrink this even further through improved detection and enforcement, relying on a combination of technology and people.

We are dedicated to providing access to information and freedom of expression, but YouTube has always had clear and robust content policies. For example, we have never allowed pornography, incitement to violence, or content that would harm children. Harmful content on our platform makes YouTube less open, not more, by creating a space where creators and users may not feel safe to share. That’s why our policy development team systematically reviews and updates all of our policies to ensure that they are current, keep our community safe, and preserve openness. They frequently consult outside experts and YouTube creators during the process, and consider regional differences to ensure proposed changes can be applied fairly and consistently around the world.

Our COVID-19 Medical Misinformation Policy represents one such example of YouTube working closely with experts. Over the course of the last year, we have worked with and relied on information from health authorities from around the world to develop a robust policy anchored in verifiably false claims tied to real world harm. Our policy addresses false and harmful claims about certain treatments and public health measures, as well as misinformation about COVID-19 vaccines. Each claim we prohibit has been vetted as verifiably false by the consensus of global health authorities, including the CDC. This policy has evolved alongside misinformation trends about the pandemic, and we have invested significant time and resources into carefully developing the policy and creating the necessary training and tools required to enforce it.

We adapted this policy over time to address the challenges of this pandemic. We began to remove content for COVID-19 misinformation in March 2020, under provisions of our policy prohibiting Harmful and Dangerous content. But as the pandemic progressed, we developed a comprehensive, stand-alone COVID-19 misinformation policy. In October 2020, we further expanded the policy to cover misinformation about COVID-19 vaccines. Since March 2020, we have vigorously enforced our COVID-19 misinformation policy to protect our users, removing 900,000 videos worldwide. And in the fourth quarter of 2020, we removed more than 30,000 videos for violating the vaccine provisions of our COVID-19 misinformation policy.

Once we have implemented a policy, we rely on a combination of people and technology to enforce it. Machine learning plays a critical role in content moderation on YouTube, and we deploy it in two key ways: to proactively identify and flag harmful content, and to automatically remove content that is very similar to content previously removed. In both cases, data inputs are used to train the systems to identify patterns in content—both the rich media content in videos, as well as textual content like metadata and comments—and our systems then use those patterns to make predictions about new content. Machine learning is well-suited to detecting patterns, which also helps us find content similar to other content we have already removed, even before it is ever viewed. We sometimes use hashes (or “digital fingerprints”) to catch copies of known violative content before they are even made available to view. The systems then automatically remove content only where there is high confidence of a policy violation—e.g., spam—and flag the rest for human review.
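A minimal sketch of that two-way split between hash-based matching and confidence-gated review might look like the following; the hash function, thresholds, and routing labels are illustrative assumptions, not YouTube's implementation (real systems use perceptual fingerprints rather than cryptographic hashes).

```python
import hashlib

KNOWN_VIOLATIVE_HASHES: set[str] = set()   # "digital fingerprints" of removed content

def fingerprint(content: bytes) -> str:
    # Stand-in for a robust/perceptual hash of the media.
    return hashlib.sha256(content).hexdigest()

def triage(content: bytes, violation_confidence: float) -> str:
    """Auto-remove copies of known violative content or very high-confidence
    predictions (e.g., spam); route borderline cases to human review."""
    if fingerprint(content) in KNOWN_VIOLATIVE_HASHES:
        return "remove"
    if violation_confidence >= 0.95:        # illustrative threshold
        return "remove"
    if violation_confidence >= 0.5:         # illustrative threshold
        return "human_review"
    return "keep"
```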

Machine learning is critical to keeping our users safe. In the fourth quarter of 2020, of the 9.3 million videos removed for violating our Community Guidelines, 94% of those videos were first flagged by machine detection. But the human review piece here is critical as well: machines are effective for scale and volume but are not able to analyze and evaluate context, whereas human reviewers allow us to evaluate context and consider nuance when enforcing our policies. Once our machine learning systems flag a potentially violative video, reviewers then remove videos that are violative while non-violative videos remain live. These decisions are in turn used as inputs to improve the accuracy of our technology so that we are constantly updating and improving the system’s ability to identify potentially violative content. In addition, when we introduce a new policy or alter an existing one, it takes our systems a bit of time to catch up and begin to detect relevant content. As we explained when we updated our hate speech policy, our enforcement of new policies improves quarter over quarter.

But as with any system, particularly operating at scale like we do, we sometimes make mistakes, which is why creators can appeal removal decisions. Creators are notified when their video is removed, and we provide a link with simple steps to appeal the decision. If a creator chooses to submit an appeal, it goes to human review, and the decision is either upheld or reversed. And we are transparent about our appeals process. As reported in our most recent Transparency Report, in Q4 2020, creators appealed a total of just over 223,000 videos. Of those, more than 83,000 were reinstated.

We also recently added a new metric to the YouTube Community Guidelines Enforcement report known as Violative View Rate (VVR). This metric is an estimate of the proportion of video views that violate our Community Guidelines in a given quarter (excluding spam). Our data science teams have spent more than two years refining this metric, which we consider to be our North Star in measuring the effectiveness of our efforts to fight and reduce abuse on YouTube. In Q4 of 2020, YouTube’s VVR was 0.16-0.18%, meaning that out of every 10,000 views on YouTube, 16-18 come from violative content. We have added historical data for this metric to our Transparency Report, showing that, since Q4 of 2017, we have seen a 70% drop in VVR. This reduction is due in large part to our investments in machine learning to identify potentially violative content. Going forward, we will update this metric quarterly alongside our regular data updates.
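The arithmetic behind VVR is simple: it is the estimated share of sampled views that land on violative content. A minimal sketch, with the sampling and review machinery abstracted away:

```python
def violative_view_rate(total_sampled_views: int, violative_views: int) -> float:
    """Estimate VVR from a sample of views reviewed against the Community
    Guidelines (spam excluded). Returns a fraction, e.g. 0.0017 for 0.17%."""
    return violative_views / total_sampled_views

# The reported Q4 2020 range of 0.16-0.18% corresponds to roughly 16-18
# violative views out of every 10,000 views.
assert abs(violative_view_rate(10_000, 17) - 0.0017) < 1e-12
```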

RAISE UP AUTHORITATIVE VOICES: YouTube is a source for news and information for people around the world—whether about events unfolding in local communities or more existential global issues like climate change. Not all queries are the same. For topics like music or entertainment, relevance, newness, and popularity are most helpful to understand what people are interested in. But for subjects such as news, science, and historical events, where accuracy and authoritativeness are key, the quality of information and context are paramount.

Our search and recommendations systems are designed to raise up authoritative voices in response to user queries, especially those that are “news-y” or related to topics prone to misinformation. In 2017, we started to prioritize authoritative voices such as local and national news sources for information queries in search results and “watch next” recommendation panels. This work continued with the addition of a short preview of text-based news articles in search results on YouTube, along with a reminder that breaking and developing news can rapidly change. And in 2018, we introduced Top News and Breaking News sections to highlight quality journalism. Our work here is far from done, but we have seen significant progress in our efforts to raise authoritative voices on YouTube. Globally, authoritative news watchtime grew by more than 85% from the first half of 2019 to the first half of 2020, with a 75% increase in watchtime of news in the first 3 months of 2020 alone. And during the 2020 U.S. elections, the most popular videos about the election came from authoritative news organizations. On average, 88% of the videos in the top 10 search results related to elections came from authoritative news sources.

Authoritativeness is also important for topics prone to misinformation, such as videos about COVID vaccines. In these cases, we aim to surface videos from authoritative news publishers, public health authorities, and medical experts. Millions of search queries are getting this treatment today and we continue to expand to more topics and countries. In addition, in April 2020, we expanded our fact-checking panels in YouTube search results to the U.S., providing fresh context in situations where a news cycle has faded but where unfounded claims and uncertainty about facts are common. These panels highlight relevant, third-party fact-checked articles above search results for relevant queries, so that our viewers can make their own informed decision about claims made in the news.

We also recognize that there may be occasions when it is helpful to provide viewers with additional context about the content they are watching. To that end, we have a variety of information panels that provide context on content relating to topics and news prone to misinformation, as well as the publishers themselves. For example, a user viewing a video about climate change—regardless of the point of view presented in the content—will see an information panel providing more information about climate change with a link to the relevant Wikipedia article.

Information panels provide critical context as well as point users to reliable sources of authoritative information. For example, when a user in the U.S. watches a video about COVID-19, we display an information panel that points to the CDC’s official resource for information about COVID-19 and the Google search results page with health information from the CDC and local statistics and guidance. Beginning last week, when a U.S. user watches a video about COVID-19 vaccines, we show a panel that points to the CDC’s online resource for vaccine information, with an additional link to the Google search results page with local information about vaccination. To date, our COVID-19 information panels have received more than 400 billion views.

For the U.S. 2020 election, we provided a range of new information panels in addition to our existing panels to provide additional context around election-related search results and video content. For example, when viewers searched for specific queries related to voter registration on YouTube, they were shown an information panel at the top of the page that linked to Google’s “How to register to vote” feature for their state. When a viewer searched for 2020 presidential or federal candidates on YouTube, we surfaced an information panel with information about that candidate—including party affiliation, office, and when available, the official YouTube channel of the candidate—above search results. We also provided an election results info panel, the content of which was flexible in order to keep pace with new developments and key milestones along the road to inauguration. These panels were collectively shown more than 8 billion times.

REDUCE THE SPREAD OF BORDERLINE CONTENT: While we have strong and comprehensive policies in place that set the rules for what we don’t allow on YouTube, we also recognize that there’s content that may be problematic but doesn’t violate our policies. Content that comes close to violating our Community Guidelines but does not cross the line—what we call “borderline content”—is just a fraction of 1 percent of what is watched on YouTube in the United States. We use machine learning to reduce the recommendations of this type of content, including potentially harmful misinformation.

In January 2019, we announced changes to our recommendations systems to limit the spread of this type of content. These changes resulted in a 70 percent drop in watchtime on non-subscribed recommended content in the U.S. that year. We saw a drop in watchtime of borderline content coming from recommendations in other markets as well. While algorithmic changes take time to ramp up and consumption of borderline content can go up and down, our goal is to have views of non-subscribed, recommended borderline content below 0.5%. We seek to drive this number to zero, but no system is perfect; in fact, measures intended to push this number lower can have unintended, negative consequences, causing legitimate speech not to be recommended. As such, our goal is to stay below the 0.5% threshold, and we strive to continually improve over time.

This content is ever-evolving and it is challenging to determine what content may fall into this category. This is why we rely on external evaluators located around the world to provide critical input on the quality of a video, and these evaluators are trained with public guidelines. Each evaluated video receives up to nine different opinions, and some critical areas require certified experts. For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models, which in turn review hundreds of thousands of hours of videos each day to find and limit the spread of borderline content. These models continue to improve in order to more effectively identify and reduce recommendations of borderline content.
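As a sketch of how such evaluator input could be turned into training labels for those models, the snippet below averages up to nine opinions and gives certified experts extra weight; the weighting and the 0.5 cut-off are assumptions for illustration, not YouTube's documented procedure.

```python
def borderline_label(evaluator_ratings, expert_ratings=(), expert_weight=3.0):
    """Aggregate evaluator opinions (1 = borderline, 0 = acceptable) into a
    training label, weighting certified experts (e.g., medical doctors) more."""
    total = sum(evaluator_ratings) + expert_weight * sum(expert_ratings)
    weight = len(evaluator_ratings) + expert_weight * len(expert_ratings)
    return 1 if weight and (total / weight) >= 0.5 else 0

# Example: seven lay evaluators split 4-3, but two certified experts both flag the video.
label = borderline_label([1, 1, 1, 1, 0, 0, 0], expert_ratings=[1, 1])
```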

Our efforts here have been publicly validated in several ways. Researchers in the United States and around the world who have studied YouTube have acknowledged that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. According to a 2020 study conducted by an Australian data scientist and a researcher at the University of California, Berkeley’s School of Information: “...data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Our study thus suggests that YouTube’s recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.” In 2018, a study from Pew Research also found that, on average, our recommendations point users to popular videos, which tend to be gaming, vlogging, and music videos rather than conspiracy theories or other types of misinformation—which again account for only a fraction of the content on YouTube.

REWARD TRUSTED CREATORS: In our mission to create and sustain an open, global platform, YouTube has also expanded economic opportunity for small businesses, artists, creators, journalists, rightsholders, and more. For many, sharing video content on YouTube is not just a hobby, but a business. Globally, the number of creators earning five figures annually increased more than 40% from December 2018 to December 2019.

Chris Bosio, owner of the Tampa barbershop Headlines, is one such creator. When initial business was slow, Chris used his free time to teach the other barbers in his shop new techniques. Impressed by his down-to-earth, easy-to-understand lessons, his business partners convinced him to upload a tutorial to YouTube to bring attention to the shop. The video was a hit, so Chris kept creating videos, and before long he saw his subscribers turn into clients. Within a couple of months, Headlines went from five clients a week to 800, many of them mentioning that they had watched Chris’s YouTube videos before coming into the shop.

As Chris’s channel grew, his subscribers began to ask him to make his own shaving accessories—a revenue stream Chris hadn’t considered yet. He learned how to make a shaving gel from YouTube tutorials and soon launched a line called Tomb45. YouTube is the main way Chris promotes the line, which creates a constant sales funnel for the company. With the support of his subscribers, Tomb45 sold over 70% of its inventory the first day. Today, Tomb45 sells 10,000 products a month in 15 countries. His YouTube business was a lifeline during the COVID-19 shutdown, when barbershops were forced to close. Without it, Chris isn’t sure his business could have survived.

Today, millions of channels from over 90 different countries earn revenue from their videos by participating in our YouTube Partner Program (YPP). Through YPP, creators earn revenue generated from advertising that is shown to viewers before or during a video. This revenue from ads is shared between YouTube and the creator, with the creator receiving a majority share—thus empowering creators to directly profit from their work.

But generating revenue on YouTube is a privilege, reserved for creators who meet specific eligibility requirements. To be eligible for YPP, a creator must have more than 1,000 subscribers, 4,000 watch hours in the last 12 months, and a track record of adhering to our Community Guidelines. YPP creators must also adhere to Google Ads policies, and any videos that are monetized must also meet an even higher bar by adhering to YouTube’s Advertising-Friendly Guidelines for content. These guidelines outline what content may not be monetized and what content warrants limited monetization. If creator content violates any of our Community Guidelines, that content will be removed from YouTube. We also enforce our Advertising-Friendly Guidelines by limiting or blocking ads on videos in accordance with those guidelines. Creators who repeatedly violate any of our rules may be suspended from YPP for 90 days and need to apply again in order to rejoin YPP.
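Translated into a simple check, the stated eligibility thresholds look like the following; treating “a track record of adhering to our Community Guidelines” as zero active strikes is a simplifying assumption for illustration.

```python
def ypp_eligible(subscribers: int, watch_hours_last_12mo: float, active_strikes: int) -> bool:
    """Stated YouTube Partner Program thresholds: more than 1,000 subscribers,
    4,000 watch hours in the last 12 months, and adherence to the Community
    Guidelines (modeled here, simplistically, as no active strikes)."""
    return (subscribers > 1_000
            and watch_hours_last_12mo >= 4_000
            and active_strikes == 0)
```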

The 4Rs and Misinformation

We confront new challenges of balancing openness with responsibility every day. This is especially true when combating harmful misinformation on our platform. We invest a great deal of resources in research, policy development, technology, and experimentation to inform our approaches and improve our effectiveness in addressing misinformation on our platform. We continuously review our policies to evaluate whether the lines are in the right place; we prominently raise authoritative sources to the top of search results and make authoritative information readily available via a range of information panels; we regularly update our recommendation systems to hone our reduction of borderline content, including harmful misinformation; and we disincentivize creators seeking to profit off of misinformation by blocking ads on their content and suspending repeat offenders from our monetization program. We believe our 4R approach to responsibility provides a powerful and effective range of tools to combat harmful misinformation online, but we know there is more we can do. We commit to continuing to improve in our efforts to combat harmful misinformation on our platform.

Prioritizing transparency and accountability

At YouTube, we believe transparency is essential to earning and sustaining the trust of our users and our business partners. As a part of Google, we have led the way for the industry in terms of reporting on content removal at the request of governments and according to our own Community Guidelines, as well as information about government requests for information about users. We continue to expand our initiatives and the information we share, and we have rolled out three major resources over the last 12 months that underscore our commitment to transparency.

First, in May 2020, we collaborated with Google to launch the first Threat Analysis Group (TAG) Bulletin. The Bulletin—published on the TAG blog every quarter—discloses removal actions that Google and YouTube have taken to combat coordinated influence operations in a given quarter. Our hope is that this bulletin helps others who are also working to track these groups, including researchers working in this space, and that our information sharing can help confirm findings from security firms and other industry experts.

Second, in June 2020, we launched a website called How YouTube Works, which was designed to answer the questions we most often receive about our responsibility efforts and to explain our products and policies in detail. How YouTube Works addresses some of the important questions we face every day about our platform, and provides information on topics such as child safety, harmful content, misinformation, and copyright. The site also covers timely issues as they arise, like our COVID-19 response and our work to support election integrity. Within the site, we explain how we apply our responsibility principles—which work alongside our commitment to users’ security—to manage challenging content and business issues.

Third, YouTube publishes quarterly data in our Community Guidelines enforcement report. This report provides public data about the number of videos we remove from our platform for each of our policy verticals (except spam) as well as additional information about channel removals, views before removals, appeals and reinstatements, and human and machine flagging. And as noted above, just this month, we updated our report to include the Violative View Rate to reflect how effectively we identify and take action on violative content.

These are important steps but we know we are being called to do more so that we can be held accountable for the decisions we make—algorithmic or otherwise. We appreciate the feedback that we have received from Members of Congress on our efforts to date and look forward to continuing to examine additional steps that could be taken to build upon our transparency efforts. We will continue to expand the information we share through our transparency report, cross-industry initiatives, blog posts, public disclosures, and other mechanisms like tools for researchers. Our goal is to achieve transparency and accountability by providing meaningful information while protecting our platform.

Child safety and digital wellbeing

As discussed above, responsibility is our number one priority at YouTube—and nowhere is this more important than when it comes to protecting kids. We continue to make significant investments in the policies, products and practices to help us do this. From our earliest days, YouTube has required our users to be at least 13 years old, and we terminate accounts belonging to people under 13 when they are discovered. In 2015, we created YouTube Kids, an app just for kids that we created to provide a safe destination to explore their interests while providing parental controls. With availability in over 100 countries, now over 35 million viewers use YouTube Kids every week. We continue to expand product availability, add new features, and offer several parental tools.

We have also heard from parents and older children that tweens have different needs that were not being fully met by our products. As children grow up, they have insatiable curiosity and a need to gain independence and find new ways to learn, create, and belong. Over the last year, we have worked with parents and child development experts across the globe in areas related to child safety, child development, and digital literacy. This collaboration informed a recently announced supervised experience for parents on our main YouTube platform, with three content settings for parents to choose from. The YouTube supervised experience looks much like YouTube’s flagship app and website, but with adjustments to the features children can use and additional protections around ads. For example, comments and live chat are disabled, as is the ability to upload content and make purchases. Additionally, automatic reminders appear for breaks and bedtime, which parents can adjust to reinforce healthy screen-time habits.

In addition to these specially designed products, our YouTube main app treats personal information from anyone watching children’s content on the platform as coming from a child, regardless of the age of the user. This means that on videos made for kids—whether explicitly designated as such by the creator or identified as child-directed by our content classification systems—we limit data collection and use, and as a result, we restrict or disable some product features. For example, we do not serve personalized ads on this content on our main YouTube platform or support features such as comments, live chat, notification bell, stories, and save to playlist. To be clear, we have never allowed personalized advertising on YouTube Kids.

Conclusion

Technology that uses algorithms is critical to our day-to-day operations, both in terms of basic user-facing functionality as well as content management and moderation at scale. But so, too, is input from people, whether by evaluating context, providing feedback on the quality of videos, or controlling how and when they choose to use YouTube. Just as machine learning systems are constantly taking new inputs to hone their pattern detection and efficacy, we work continuously to address new threats and identify ways to improve our systems and our processes. Responsibility is and will continue to be our number one priority—our business depends on it.

We look forward to continuing to engage and discuss areas where we share priorities, and how we can join together to support research to identify novel approaches to problems that threaten both our users and your constituents, as well as thinking about how media literacy efforts can help users develop skills to build resiliency against misinformation.

Thank you for the opportunity to discuss our work with you today.

Testimony of Joan Donovan, Ph.D.: Research Director at Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy

From “Get Big Fast” to “Move Fast and Break Things” and Back Again

Before there was “move fast and break things,” there was another animating ethic of the tech industry: “Get big fast!” This philosophy has proven to be good for the industry, but bad for the world. Over the last decade, social networking (connecting people to people) morphed into social media (connecting people to people and to content), which resulted in exponential profits and growth. Most people don’t know the difference between social networking and social media, but this transition was the key to products like Facebook, Twitter, and YouTube dominating global markets in mass communication. In short, networks are the wealth of society. Networks are where the rich and powerful derive their importance and high status, hence the saying “he or she is connected” when referencing someone you do not want to mess with. When social media is the vector of attack against our democracy and public health, a small group of highly motivated and connected actors can manipulate public understanding of any issue simply by using these products as they are designed.

How social media companies got big fast was a combination of lax consumer regulation, eschewing risks, buying out the competition where possible, and a focus on scale that made for poor security decisions. Beyond connecting people and content, products like Facebook, YouTube, and Twitter rely on other companies and individuals to provide them with more data, increasing the scale of this massive and sprawling data infrastructure across the web. Mapping, tracking, and aggregating people’s social networks made social media a viable business because companies could sell data derived from interactions or monetize those relationships as other products, such as advertising, targeted posts, and promoted messages. Social media data should be legally defined at some point, but for now, I am referring to information about people, how they behave online, their interactions with people and content, and location tagging.

But, it wasn’t enough just to collect and sort data on the product: targeted advertising and data services only become useful when paired with other kinds of data. For example, in Nov. 2012, when looking at different models for monetizing Facebook, Zuckerberg wrote in a company email that allowing developers access to data without having these companies share their data with Facebook would be “good for the world, but bad for us.”1 This is because Facebook knew, even back then, that their products could threaten privacy on a scale society had never reckoned with before. Now, these social media products that favor runaway scale and openness threaten not only individual rights, but also the future of democracy and public health.

By leveraging people’s networks and content at the same time, a business model emerged where key performance indicators included:

(1) growth of daily and monthly active users,

(2) increasing engagement metrics, and

(3) advertising revenue.

The last decade has been marked by these companies expanding exponentially on all of these indicators. In a PC Mag article from 2011 about the best mobile apps, Facebook and Twitter were both ranked lower than an app that turns your camera into a flashlight. In 2011, Twitter had approximately 100 million users, Facebook had 845 million, and YouTube had roughly 800 million. By 2020, Twitter reported 353 million active users, Facebook 2.7 billion, and YouTube 2.29 billion. Advertising revenue continues to grow across all of these products, with Google ($146 billion) and Facebook ($84 billion) dominating.

Using accounts as a key performance indicator drove a shadow industry of growth hacking, which eventually was integrated directly into the products—allowing a massive and known vulnerability of sock puppets, or fake accounts, to persist. For those who understood how to manipulate this vulnerability, increasing engagement meant delivering more novel and outrageous content, which is why false news, harassment, and defamation thrive on social media. For social media companies, decisions about profit drive innovation, not higher principles like access to truth, justice, or democracy. As a result, these products are not only a parasite on our social networks feeding off every click, like, and share, but they also cannot optimize for the public interest. It did not have to be this way. Back in 2011, mobile was developing quickly and there were many ways in which social media could have been designed to foster community safety and to maximize privacy. Instead, the drive to maximize the number of users, engagement, and revenue led us here.

Most crucially, the entire internet infrastructure needs an overhaul, so that companies are not able to siphon data and leverage it to maximize an advantage over consumers. But users are not necessarily the customers; advertisers are. The structure of online advertising pipelines systematically advantages these companies at the expense of several industries, most importantly journalism. By becoming the gateway to news audiences, top social media companies hoard advertising revenue that belongs to those who create engaging content for display on their products, most notably journalists.

When criticized about the squeeze their products have placed on journalism, Facebook and Google will cite their various news initiatives. But these initiatives pick and choose partners and then channel journalists’ labor directly back into their products. Facebook’s fact-checking program, for instance, partners with several reputable news outlets, but labelling has done little to disincentivize fake news. Moreover, fact-checking is ad hoc and will never rival supporting independent investigative journalism, a bedrock of a strong democracy. Instead, this initiative expands Facebook’s ever-growing web of influence over news, as it becomes increasingly difficult to criticize the corporation for fear of losing resources.

Nevertheless, as journalism wanes, social media serves misinformation-at-scale to hundreds of millions of daily active users instantaneously, which is especially odious when misinformation is promoted in trends and recommendations. In October 2020, I testified about conspiracies and misinformation having similar harmful societal impacts as secondhand smoke. Post-2020, we see misinformation-at-scale’s deadly effects in the US. Scammers and grifters use social media to sell bogus products and push conspiracies—including monetizing the pandemic in grotesque ways to sell fake cures or to scaremonger. Going into the pandemic, anti-vaccination activists had a huge advantage over public health officials because they were able to leverage already dense and sprawling networks across social media products. As a result, they tied their attacks to breaking news cycles, undermining public confidence in science. There was nothing public health officials could do to stop the torrent of misinformation drowning doctors and hospitals, as evidenced by the reporting of Brandy Zadrozny and Ben Collins at NBC News. The same situation holds for other public servants, like election officials, who continue to bear the costs of election disinformation and are leaving their jobs because managing misinformation-at-scale is unsustainable.

For journalists, researchers, and everyone trying to mitigate misinformation, the experience is like trying to put your hands up against a growing ocean swell as it washes over you. Journalists, universities, public servants, and our healthcare professionals take on the true costs of misinformation-at-scale, and that is not merely an existential statement. Enormous resources are lost to mitigating misinformation-at-scale, and the cost of doing nothing is even worse. For example, take the blatant lie that the vaccines contain microchips. To counter it, journalists traded off covering other stories, while public health professionals continue to explain that there are no microchips in the vaccine.

The only way to fix a problem like motivated misinformers involves platforms enforcing existing policies, researchers and journalists working together as tech watchdogs, and policymakers opening the way for a public interest internet. Regulators should introduce public interest obligations to social media newsfeeds and timelines so that timely, local, relevant, and accurate information reaches the masses-at-scale. Together, we must make a public interest internet a whole-of-society priority.

Going Down the Rabbit Hole

What I have learned studying the internet over the last decade is simple: everything open will be exploited. There is nothing particularly new about misinformation and conspiracies circulating. After all, there is no communication without misinformation. However, over the last decade the design of social media itself created favorable conditions for reaching millions instantaneously while also incorporating financial and political incentives for conducting massive media manipulation campaigns. The most dangerous aspects of these products come to light when we analyze who gains an advantage when openness meets scale.

I often joke nervously that “my computer thinks I’m a white supremacist.” One only needs to look at my homepage on YouTube to illustrate this point. On the homepage, YouTube clearly displays your interests and makes recommendations. Daily, it recommends me content from a white supremacist whom YouTube has already banned, yet recent videos of his livestreams are continuously recommended. I first learned of the pandemic in January 2020 from a conspiracist and nationalist YouTuber, who was excited about shutting down the borders to stop the “Wu Flu.” I had spent countless hours down the rabbit hole with this YouTuber before, who that night in January 2020 spent over three hours extolling his xenophobic views. Racialized disinformation continues to be a critical source of political partisanship in the US because it is so easy to manipulate engagement on race and racism—and it’s profitable.

While some debate the existence of “the rabbit hole” on social media, our research team at Shorenstein has been looking deeper at this phenomenon. Going down the rabbit hole means getting pulled into an online community or subculture, where the slang, values, norms, and practices are unfamiliar, but nevertheless engrossing. There are four aspects of the design of social media that lead someone down the rabbit hole. They are:

(1) repetition is seeing the same thing over and over on a single product,

(2) redundancy is seeing the same thing across different products,

(3) responsiveness is how social media and search engines always provide some answer, unlike other forms of media, and

(4) reinforcement is the way algorithms work to connect people and content, so that once you’ve searched for a slogan or keyword, the algorithms will keep reinforcing that interest.

Nowhere is this more prevalent than on YouTube, where any search for conspiracy or white supremacist content, using the in-group’s preferred keywords, will surface numerous recommendations. It’s a misconception that these online echo chambers or filter bubbles are hyper-personalized and conclusively shape individual behavior in a specific direction. Instead, what algorithms tend to do is group people with homogeneous characteristics into buckets, who are then served similar content in batches. From 9/11 conspiracies, to the “vaccines cause autism” meme, to QAnon, some conspiracist communities have been thriving on social media for decades. But it is a mistake, albeit a popular one, to imagine social media as an attention economy in which individual users make independent choices about where to spend their time.

It’s more accurate to call the rabbit hole an “algorithmic economy,” where algorithms pattern the distribution of content based on signals from millions of people: users are sorted into buckets according to generic profiles, and content is served to those buckets in batches.

On its surface, the design is not insidious: the buckets and batches are related to generic interests. For example, if you’re a baseball fan and YouTube knows you want more sports content, that’s a great service. But if you’ve searched for more contentious content, like QAnon, Proud Boys, or Antifa recently, you are likely to enter a rabbit hole, where extracting yourself from reinforcement algorithms ranges from difficult to impossible. While customers, such as advertisers, have lobbied these social media companies for better ad placement, users are not able to easily swap out interests or stop targeted recommendations altogether.
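To make the bucket-and-batch dynamic concrete, the following is a minimal, purely illustrative sketch; the catalog, weights, and function names are invented for this example and do not describe any platform’s actual recommender. It shows how a generic interest profile, a single search for a contentious keyword (responsiveness), and a couple of reinforced clicks can flip which bucket dominates a user’s recommendations.

```python
# Toy "buckets and batches" recommender -- invented for illustration only.
from collections import Counter

CATALOG = {
    "baseball":   ["pitching tips", "trade rumors", "classic highlights"],
    "cooking":    ["knife skills", "weeknight recipes", "bread basics"],
    "conspiracy": ["secret plot explained", "what THEY won't tell you", "insider leaks"],
}

def bucket_for(user_signals: Counter) -> str:
    """Assign the user to the generic interest bucket with the strongest signal."""
    return user_signals.most_common(1)[0][0]

def recommend_batch(user_signals: Counter, batch_size: int = 3) -> list:
    """Serve a batch of content drawn from the user's dominant bucket."""
    return CATALOG[bucket_for(user_signals)][:batch_size]

def reinforce(user_signals: Counter, bucket: str, weight: int = 2) -> None:
    """Each engagement adds weight to that interest, so the same bucket is
    more likely to dominate the next batch (the reinforcement loop above)."""
    user_signals[bucket] += weight

# A user with mild, generic interests...
signals = Counter({"baseball": 3, "cooking": 2})
print(recommend_batch(signals))       # homepage serves the baseball batch

# ...searches one contentious keyword; responsiveness means something always comes back.
print(CATALOG["conspiracy"][:3])

# Two reinforced clicks on those results now outweigh years of baseball interest.
reinforce(signals, "conspiracy")
reinforce(signals, "conspiracy")
print(bucket_for(signals))            # -> "conspiracy"
print(recommend_batch(signals))       # homepage now serves the conspiracy batch
```

The point of the toy example is the asymmetry: a passive, generic interest accumulates slowly, while reinforced engagement on a newly searched keyword compounds quickly, which is the pull of the rabbit hole described above.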

Getting Out of the Rabbit Hole

My last point is about the past five years of social media shaping our public discourse. Social media provides a different opportunity for the enemies of democracy to sow chaos and plan violent attacks. It’s fourth-generation warfare, where it is difficult to tell the difference between citizens and combatants. Russia impersonated US social movements in 2016 precisely because movements elicit lots of engagement, where participants see sharing content and network-making as political acts. That kind of political participation was challenging for city governance during the 2011 Occupy Movement, but that moment—a decade ago—should have taught Facebook, YouTube, and Twitter more about the range of effects their products could have on society. Now we see these products used by authoritarians who leverage a mix of authentic political participation paired with false accounts and fake engagement to win elections.

Cobbled together across products, our new media ecosystem is the networked terrain for a hybrid information war that ultimately enables dangerous groups to organize violent events—like the nationalists, militias, white supremacists, conspiracists, anti-vaccination groups, and others who collaborated under the banner of Stop The Steal in order to breach the Capitol. Last week, a Buzzfeed article included a leaked internal Facebook memo on the exponential growth of “Stop the Steal” groups on their platform. The report clearly illustrated that groups espousing violent and hateful content can grow very fast across the product. Even when Facebook removes groups, it does not stop the individuals running them from trying again. Adaptation by media manipulators is a core focus of our research at the Shorenstein Center. Facebook found that their own tools allowed Stop the Steal organizers to leverage openness and scale to grow faster than Facebook’s own internal teams could counter.

In short, even when Facebook is aware of the risks its product poses to democracy, its interventions do little to contain the general public’s exposure to misinformation-at-scale. Even when determined to stop the spread of misinformation, Facebook could not counter it with their internal policies. Misinformation-at-scale is a feature of Facebook’s own design and is not easily rooted out. Because Facebook defines the problem of misinformation-at-scale as one of coordinated inauthentic behavior, they were woefully unprepared to handle the threats posed by their own products. They were unprepared in 2016 and have since been unable to handle the new ways that motivated misinformers use their products.

What began in 2016 with false accounts and fake engagement inflaming and amplifying societal wedge issues slowly transformed over time into a coordinated attack on US democracy and public health. The biggest problem facing our nation is misinformation-at-scale, where technology companies must put community safety and privacy at the core of their business model, ensure that advertising technology is used responsibly, and quickly act on groups coordinating disinformation, hate, harassment, and incitement across the media ecosystem. A problem this big will require Federal oversight.

But I am hopeful that another future is possible, if tech companies, regulators, researchers, and advocacy groups begin to work together to build a public interest internet modeled on the principle that the public has a right to access accurate information on demand. The cost of doing nothing is democracy’s end.

Testimony of Tristan Harris: President and Co-Founder, Center for Humane Technology

Thank you Senator Coons and Senator Sasse.

I was featured in the Netflix documentary The Social Dilemma, which has now been seen by more than an estimated 100 million people in 190 countries and in 30 languages. The film burst into the public conversation because it confirmed what so many people knew and felt already: that the business model behind social media platforms has rewired human civilization with addiction, mental health problems, alienation, extremism, polarization, and the breakdown of truth. Now the world wants to see real change.

Why did the film take off? Not because it spoke to a few nuisances of technology, but because insiders who were involved spoke clearly about why tech’s deranging influence was existential for democracy. Quoting the film: “If we can’t agree on what’s true, then we can’t navigate out of any of our problems.”

In the Cold War, the United States invested heavily in continuity of government. Faced with the threat of a nuclear attack, we spent millions on underground bases and emergency plans to ensure the continuity of U.S. government decision-making and maintain our capacity to respond to adversaries. But today, an invisible disruption to the continuity of U.S. government has already happened underneath our noses. Not by nuclear missile or by sea, but through the slow, diffuse process by which social media made money from pitting our own citizens and Congressional representatives against one another in an online Hobbesian war of “all against all,” making agreement or good faith impossible, shattering our shared reality, and effectively disabling our societal O.O.D.A. loops (observe, orient, decide, and act). The gears have jammed.

Meanwhile, we face genuine existential threats that require urgent attention: from the rise of China to the climate crisis, nuclear proliferation, vulnerable infrastructure, and dangerous inequality. Today’s tech platforms disable our capacity to address these urgent problems.

That is why we must reset our criteria for success. Instead of evaluating whether my fellow Facebook, Twitter, and YouTube panelists have improved their content policies or hired more content moderators, we should ask what would collectively constitute a “humane” Western digital democratic infrastructure that would strengthen our capacity to meet these threats. Instead of shortening attention spans, distracting us, and competing for addiction and outrage, these platforms would compete from the bottom up to deepen and cultivate our best traits: sustained thinking and concentration, better critical thinking, and easier ways to understand each other and identify solutions built on common ground. We should be interested in structural reforms to tech platforms’ incentives that would comprehensively strengthen, rather than disable, our capacity to respond to these existential threats, especially in competition with China.

My fellow panelists from technology companies will say:

● We catch XX% more hate speech, self-harm and harmful content using A.I.

● We took down XX billion fake accounts, up YY% from last year.

● We have Content Oversight Boards and Trust & Safety Councils.

● We spend $X million more on Trust & Safety in 2021 than we made in revenue in an entire year.

But none of this is adequate to the challenge stated above, when the entire model is predicated on dividing society. It’s like Exxon talking about the number of trees they have planted, while their extractive business model hasn’t changed.

As The Social Dilemma explains, the problem is their attention-harvesting business model. The narrower and more personalized our feeds, the fatter their bank accounts, and the more degraded the capacity of the American brain. The more money they make, the less capacity America has to define itself as America, reversing the United States’ inspiring and unifying motto of E Pluribus Unum, or “out of many, one,” into its opposite: “out of one, many.”

We are raising entire generations of young people who will have come up under these exaggerated prejudices, division, mental health problems, and an inability to determine what’s true. They walk around as a bag of cues and triggers that can be ignited. If this continues, we will see more shootings, more destabilization, more children with ADHD, more suicides and depression—deficits that are cultivated and exploited by these platforms.

We should aim for nothing less than a comprehensive shift to a humane, clean “Western digital infrastructure” worth wanting. We are collectively in the middle of a major transition from 20th century analog societies to 21st century “digitized” societies. Today we are offered two dystopian choices: either install a Chinese “Orwellian” brain implant into society, with authoritarian controls, censorship, and mass behavior modification, or install the U.S./Western “Huxleyan” societal brain implant that saturates us in distraction, outrage, and trivia, amusing ourselves to death.

Let’s use today’s hearing to encourage a third way: with the government’s help, incentivizing Digital Open Societies worth wanting that outcompete Digital Closed Societies.
