Red Herrings To Watch For At The Senate’s Child Safety Hearing

Vaishnavi J / Jan 29, 2024

The Senate Judiciary Committee will hear from tech CEOs on January 31, 2024, including (l-r) Shou Zi Chew (TikTok), Jason Citron (Discord), Linda Yaccarino (X), Mark Zuckerberg (Meta), and Evan Spiegel (Snap).

On Wednesday, Jan 31, the Senate Judiciary Committee will hold a hearing on online child exploitation with CEOs from Meta, X, TikTok, Snap, and Discord. It will be the tenth hearing on child safety and teen mental health online in less than three years, more than any sitting Congress has ever held in such a short span.

These hearings are always hotly anticipated in the weeks and months leading up to them, only to leave many viewers and advocates for youth rights feeling dissatisfied afterwards. Experts bemoan the vague statements from CEOs, matched only by banal questions from Congress that suggest a shallow understanding of child safety online.

The well-oiled run of show

After ten hearings in three years, both companies and policymakers have become pros at this. Policymakers will open with robust criticism of the companies, arguing that they put profit ahead of people and pointing to various public and leaked documents that supposedly prove it. Companies will argue that they are responsible, thoughtful players who sincerely want to address these issues, and that individual documents cannot tell the whole story. They will point to their leaders’ personal virtues (e.g., their status as parents), various metrics that allegedly show they take child safety seriously, and the number of industry-wide partnerships they have joined or spearheaded to address these issues.

Policymakers will then address a wide range of harms to children that these companies are said to perpetuate. They will raise specific harrowing cases in which children were physically or emotionally harmed, demand that companies explain why they allowed this to happen, and speculate extemporaneously about the broader product safety measures companies could have taken to avoid these harms. Executives will counter with their own examples of how positive their products have been for children’s lives, pointing to activists and influencers who gained a global platform because of them. They may draw parallels to other cultural institutions, like television and literature, and imply that policymakers want to stifle these legitimate examples of free expression.

Executives will likely then turn to the obstacles to actually addressing the harms their products cause. They will highlight the potential negative consequences of any proactive measures beyond what they already do, and will likely argue that legislation, carefully developed in consultation with industry over a long period, is the only way to enforce such measures. Most significantly, they will suggest that the problems their products perpetuate already exist in society, and that any attempt to require more responsible product design is a naive attempt to sweep long-standing societal flaws under the carpet.

At the end, both parties will likely leave feeling bruised but triumphant about the theater they participated in. Companies will have shared no new commitments or insights that could drive the conversation forward, and the challenging environment young people face online will remain intact.

Here are some red herrings you might notice companies and policymakers raise on January 31, and what they should be addressing instead.

“As a parent, I take the safety of children seriously.”

Company leaders and policymakers who attend these hearings and are themselves parents are likely to reference that fact to show their commitment to child safety online. The implication is that their experience as parents gives them a unique sense of urgency about addressing these issues.

This is an appeal to emotion, and we should dismiss it as quickly as possible. Outside of government hearings, there is no evidence that company leaders draw on their experience as parents to actually consider the wellbeing of children on their platforms. Even if a parent can figure out what is best for their own children, it is unrealistic to expect them to write policies and build products that affect millions of children globally, from a range of economic, cultural, and social backgrounds. CEOs and policymakers are no different in this regard. Their status as parents may help them understand these issues, but they need to show how they actually use that experience to keep children safe online.

What to ask instead: Aside from being a parent, how have you worked to understand and address the challenges that children face on your platform? How do you actively keep up to date with the most pressing challenges facing young people online right now? Do you or any members of your C-suite have a sufficient background in developing products and policies for children?

“We employ a large number of people to work on child safety.”

Companies are likely to share how many employees, or what percentage of their staff, work on these issues. This is an appeal to credibility, meant to show that they are investing in solutions and doing their best to reduce harm.

However, headcount alone is not an indicator of a company’s intent to address harm. Companies make decisions based on the makeup of their leadership; if there are no senior trust & safety leaders whose voices are heard and incorporated into decision-making, no number of junior employees will be sufficient to demonstrate the company’s intent.

A more relevant indicator of a company’s intent might be the percentage of its budget allocated to addressing child safety issues across the platform, or whether the company has a dedicated safety review process to assess whether any launched or imminent product risks causing outsize harm to children, along with a process to mitigate or eliminate those risks.

More significantly, good intentions cannot absolve CEOs of the harms their products may be causing children, and measured against those outcomes, the headcount statistic is meaningless.

What to ask instead: How many layers of leadership separate your trust & safety leaders from the CEO? What processes do you have in place to ensure that your products do not cause harm to children, and what independent mechanisms, staffed by third-party child safety experts, have you established to verify this? How is your performance on child safety reflected in your earnings calls or in personal performance bonuses for C-suite leaders?

“Over the last x years, we have launched y number of tools and policies.”

Skim a company’s corporate blog or transparency reports and you will find a number of triumphant announcements about new safety features rolled out to keep the community safe. These range from more proactive detection and protection measures, to a wider range of tools and education that help young people control their experiences, to greater insights and controls for parents over their children’s activity. Executives will reference these individual tools as an indicator of their commitment to safety.

However, individual tools and launches are not a substitute for a more foundational strategy for ensuring that young people have age-appropriate experiences online. This requires user safety to be a basic design consideration in the development of any product, ongoing assessments of existing and new products, and an active process of iterating on existing products and policies. Launching a new set of user control tools without fundamentally examining the product architecture is a temporary Band-Aid at best. It transfers responsibility for safe online usage to users who are in a crisis situation, the worst possible time to expect them to make these choices.

What to ask instead: How do you incorporate responsible design into your product development processes? What are your internal review processes and escalation paths to ensure that any existing or new product meets a predetermined set of online safety requirements? Over the last five years, how often have you blocked products from launching because they were not safe enough for children, or withdrawn products from the market after receiving feedback on the harms they were causing?

“We regularly engage with a wide range of stakeholders and build partnerships across the industry.”

Child exploitation online is a difficult and sensitive subject that requires thoughtfulness to ensure children can still engage online in a free and healthy way. Executives are likely to cite their listening tours, feedback sessions with parents, internal task forces, or industry-wide partnerships as examples of their commitment to keeping children safe online.

However, none of these sessions necessarily results in binding changes to how companies build their products, write the rules of engagement, or enforce their policies for young people. Such sessions can provide valuable insights into youth autonomy, stages of development, and what healthy product and policy development can look like. But without a willingness to share the outcomes of these sessions, or how the feedback was incorporated into product development, it is impossible to assess whether these engagements are effective.

Companies also rarely share any insights into their methodology for engaging with stakeholders, leaving them free to engage with a specific subset that is likely to agree with their preexisting views. It is rarely clear whether they consult with young people, parents, and civil society across a breadth of socio-economic backgrounds or geographical territories.

What to ask instead: What were the insights you heard during your engagements and how did you incorporate the feedback you received into your products? Why did you choose not to adopt some of the recommendations you heard during these sessions? How do you ensure that your feedback gathering process is truly representative of the communities that you serve?

“Parents should be responsible for their children’s experiences.”

In the last three years, parental controls have been at the forefront of every company’s discussion around children’s safety online. In March 2022, Meta first rolled out parental controls for Instagram and its Meta Quest headsets. Snap quickly followed suit in August 2022, and less than a year later, so did Discord, reversing its long-held position that it would prioritize the needs of its users rather than their parents.

The language used to support these launches is usually the same: Parents need to be in the driver’s seat of their children’s experiences. These controls allow parents to monitor how their children are spending their time across different apps, decide what is appropriate and inappropriate content or interactions for them to have, and engage in open healthy conversations with their kids about how to safely interact online.

Offloading responsibility for child safety online onto parents is vastly impractical. Children use a wide range of apps for different purposes, some of which may even be required for school. A parent would need to read and stay current on safety guides from Instagram, TikTok, Discord, and Snap, as well as any number of other apps, then enforce this guidance with their children: a full-time job. It also sets parents and their children up for conflict by default, positioning the parent as a censorious gatekeeper rather than a partner in healthy digital experiences.

This is probably why parental controls have such low adoption rates. Few tech companies publicly share data on how many parents have adopted or actively use parental controls on their apps, partly because the numbers would likely not be compelling. A recent literature review by Stoilova, Bulger, and Livingstone also highlights that younger, more digitally savvy parents are more likely to adopt parental controls, and that parents often adopt such tools only after an incident of harm, such as cyberbullying, has occurred.

What to ask instead: Why should parents be responsible for governing the products that you build, market to children, and profit from? How do you suggest parents keep up with the rapidly changing online environments that their children experience? How do you ensure that your products are age appropriate by design, and reduce the governing load on already-stretched parents?

No single hearing, or even ten, will decisively change how we regulate online platforms or make them more responsible players. But more thoughtful and relevant discussions at these hearings can pave the way for better, more practical regulation and industry-wide standards that support young people in having healthy experiences online.

Authors

Vaishnavi J
Vaishnavi J is the founder & principal of Vyanams Strategies, advising companies and civil society on how to build safer, more age-appropriate experiences for young people. She is the former head of youth policy at Meta.
