The Evolving Trust and Safety Vendor Ecosystem
Tim Bernard / Jul 24, 2023

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector.
In September of last year, just before the inaugural TrustCon, the annual convention of trust and safety professionals, the Tech Policy Press Sunday Show podcast ran an episode entitled “Trust and Safety Comes of Age?” The following month, Elon Musk acquired Twitter and proceeded to decimate its trust and safety operation. This was followed by extensive cuts and hiring freezes at other large platforms, including Meta, Google, and Amazon. Since then, Donald Trump has been readmitted to Twitter, Facebook, and YouTube, and misinformation policies on COVID and elections have been rescinded. Last month, the newsletter Platformer asked, “Have we reached peak trust and safety?”
This month, I attended the second TrustCon in San Francisco. With over 800 attendees, the sold-out event was three times the size of last year’s. Despite the setbacks described above, the mood was upbeat, with professionals from across the industry, along with civil society representatives, regulators, and academics, excited to spend time connecting and learning from each other. There is a certain esprit de corps among those who spend their days working in the dark trenches of the internet; that sensibility was on display in the slogans on stickers and fridge magnets distributed to conference participants.
Perhaps unsurprisingly, the professionals at TrustCon were somewhat optimistic about the future of the field, while acknowledging the pain of the current downturn. The first driver of this optimism is the ongoing baseline need to keep platforms usable; as one trust and safety leader told me, “people will continue to be assholes on the internet, and users won't stand for it.” Generative AI, moreover, has made it dramatically easier to produce problematic content at scale. The second driver is the EU’s Digital Services Act (DSA), which will have a substantial impact on several aspects of trust and safety work and was the subject of several panels at TrustCon.
In order to fulfill their trust and safety goals and policies—and in some cases requirements for DSA compliance—some platforms are turning to a growing ecosystem of vendors. Many of these were, naturally, TrustCon sponsors and exhibitors, but they were also well-represented among the speakers, and seemed to constitute a new locus of energy and investment in a field that has been feeling the squeeze from the tech downturn. A number of VCs were present, specifically interested in investing in trust and safety startups.
A hundred thousand moderators
Some of these operations are not new. First-level content moderators (some 100,000 of them, as estimated by one TrustCon attendee) are employed by business process outsourcing companies (BPOs). Many of these companies have been around for decades, and their mix of clients has evolved over time, though customer service call centers are typically their mainstay. TaskUs senior vice president Sean Neighbors admitted that his company’s revenue in this area has been flat for a few years, and that content moderation clients have moved more of their services offshore to save money. In a comment echoed by other BPOs, he also mentioned that labeling services for machine learning models are taking up an increasing share of their business, perhaps balancing the downturn in content moderation spending.
Another trend for BPOs has been increasing the sophistication of the services they offer. TaskUs is bringing its content moderation and financial crimes services departments together to allow for sharing of connected expertise. Wipro is offering moderator wellness services, developed for its own employees, to those who work for other platforms too. In a connected development, Teleperformance, which had announced last year that it was exiting the content moderation business following bad publicity about its moderators’ exposure to traumatic material, said it had reversed that decision and is doubling down on efforts to keep its employees healthy. At TrustCon, the company revealed the members of a new safety advisory board, which includes, among others, South Asian civil rights campaigners Ranjana Kumari and Nighat Dad, and Sarah T. Roberts, a UCLA professor who has written critically about the outsourced moderation industry.
Moderation platforms for content platforms
The most prominent startups in trust and safety have emerged in the last few years, offering a core product to facilitate content moderation on platforms, along with supplementary services and differing emphases. These companies are worth discussing in greater detail, as they represent the biggest shift in how trust and safety is evolving as a field. Somewhat unusually for the US-focused tech platform industry, key players in this space are based in France, Israel, and the UK, as well as the US, though all have alumni of large platforms in their senior ranks who know both the necessities and the failings of legacy systems.
Although these companies were founded during the 2017-2021 trust and safety boom, the downturn has meant that platforms have less inclination and ability to develop in-house content moderation tooling. With the DSA requirements looming, subscribing to a service that claims to be compliant out-of-the-box is all the more appealing. Nevertheless, some vendors shared that new deals have been slower to close over the past few months.
ActiveFence was the first of this set to emerge and, after establishing an intelligence gathering service that proactively detects violating content on platforms, now offers what is plausibly the most complete solution for content moderation (with a freemium model available for small platforms). CEO Noam Schwartz emphasized the importance of making insights of all kinds actionable by integrating them into the moderation platform. With a recent $100 million funding round completed, part of Schwartz’s plan is to consolidate the market, including by acquiring smaller services and absorbing their advantages into ActiveFence’s end-to-end product.
Although all of these platforms promise some form of DSA compliance, the French company Tremau also offers standalone consultancy services, and arguably has the deepest DSA expertise of the group, with two of its leaders having been involved in the development of the legislation. While its current platform clients are in the 60-100 moderator range (i.e., medium-sized platforms), Tremau’s CEO, Louis-Victor de Franssu, confirmed to me that the company’s advisory clients include three of the 17 platforms designated by the European Commission as Very Large Online Platforms (VLOPs).
Founded by former Meta employees, Cinder says it is built with an awareness of the organizational complexity of larger companies, and so promises appropriate access for all the teams that may need to touch trust and safety processes. Co-founder Brian Fishman came to Meta from a counter-terrorism and national security background, and Cinder seems particularly tailored to the adversarial side of trust and safety, touting systems that are easy to reconfigure without engineering expertise.
Checkstep, a UK firm, is also focused on regulations, and may be well placed to ensure compliance with the UK’s anticipated Online Safety Bill. A newer, more lightweight entrant in this category, Cove, manages proprietary iterative classifiers for its clients, and offers platforms the option of integrating APIs from independent tools and services (like those discussed below) into its platform.
Individual tools and services
The next category of vendors sells more discrete services to platforms. Some can appeal to cost-conscious platforms with the promise of automating a previously labor-intensive process. Others address issues so critical that it may well be unwise to end their contracts.
These fall into a few subgroups. The first consists of tools that identify violations on the fraud side of the industry rather than the content moderation side; these vendors tend to be more established companies. Pipl’s CEO, Matthew Hertz, explained that his identity verification service had previously been used predominantly by ecommerce sites; only in the second half of last year did it launch a less expensive tier of the product aimed at social platforms. A speaker from Sift gave a presentation on how she went “undercover” on dating platforms to identify the characteristics of pig butchering scams so as to better detect them for the company’s clients.
The second main grouping is providers of AI services, mostly content classification. Hive, which is ten years old, has classifiers for a huge range of content types. Newer companies like Unitary, which classifies video content with greater understanding of its context, have emerged along with more recent advances in the technology (disclosure: I also write for Unitary). Perspective API is a free service created by Google’s Counter Abuse Technology team and its social good team, Jigsaw.
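To give a concrete sense of how platforms consume these classification services, below is a minimal sketch of calling Perspective API from Python to score a comment for toxicity. The endpoint and request shape follow Google’s published documentation for the API; the key, the sample text, and the 0.8 review threshold are placeholders for illustration, not recommendations.

    import requests

    # Placeholder key: Perspective API access is provisioned via Google Cloud.
    API_KEY = "YOUR_PERSPECTIVE_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    def toxicity_score(text: str) -> float:
        """Return Perspective's TOXICITY probability (0.0 to 1.0) for a comment."""
        payload = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=payload, timeout=10)
        response.raise_for_status()
        body = response.json()
        return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # A platform might auto-queue high-scoring comments for human review;
    # the threshold here is purely illustrative.
    if toxicity_score("sample user comment") > 0.8:
        print("Send to moderator review queue")

The commercial vendors in this space expose broadly similar REST classification endpoints, each with its own schema and, in Hive’s case, a far wider range of content types.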
A third, smaller, group includes companies which offer more targeted intelligence and investigation services. LegitScript, for instance, specializes in identifying illegal sales on marketplaces, a function that requires substantial expertise for global platforms serving countless jurisdictions. Crisp was recently acquired by Kroll, a well-established corporate intelligence consultancy.
In between
In this fast-evolving landscape, these categories are porous. One interesting example is Safer, created by Thorn, a nonprofit fighting child sexual abuse online. It is a hybrid between a platform and a discrete tool: focused exclusively on the narrow category of child sexual abuse material, using custom AI classification, and providing a full regulation-compliant workflow platform. Trust Lab (started by former Googlers) has a trust and safety analytics product, but is also building out an end-to-end solution specifically for DSA compliance. Based in Silicon Valley, it seems to be well connected with potential clients, counting five of the ten largest platforms among its current customers. Spectrum Labs started out with AI classification products and now also offers a full, modular moderation platform.
A maturing industry
Underpinning TrustCon and the work of the Trust & Safety Professional Association that hosts it is the conviction that platforms must collaborate to share expertise in order to advance the field and internet safety. Platforms also have a range of formal and informal ways to share specific information, including through bodies like GIFCT and the Tech Coalition, as well as through the Meta-hosted data pipeline ThreatExchange. It is possible to think of “vendorization” as the next step in this process: the vendors credibly claim to combine the insights they glean from each of their clients (as well as their dollars) to better identify trends and provide cutting-edge solutions to all their customers.
Two of the vendors I spoke with offered analogies between trust and safety and adjacent fields. Cinder’s Brian Fishman cited Alex Stamos’s comparison of trust and safety today to cybersecurity ten years ago, and Pipl’s Hertz offered the analogy of the online fraud prevention industry. Both of these fields have evolved to a point where a range of widely accepted standards exists and it is normal practice for large companies to purchase services from outside vendors that facilitate and complement the work of their in-house professionals. This appears to be where trust and safety is headed too. Schwartz, of ActiveFence, spoke of an emerging “common language” for trust and safety technology.
(This development also brings risks, especially if access to trust and safety resources is centralized in a small number of vendors. The recent Scaling Trust on the Web report (Annex 2), facilitated by the Atlantic Council’s Digital Forensic Research Lab, noted the tendency of trust and safety startups to be acquired by platforms for internal use only and called for the development of more shared and open-source tooling.)
In the current environment, it would be surprising if any trust and safety budgets were growing, so any spending on vendors must come from somewhere. The vendors I spoke to were reluctant to say whom on the platform teams they might be replacing; they were adamant that in-house trust and safety teams would always be necessary, and none appeared to be seeking to replace their clients’ entire operations. However, one attendee suggested to me that engineering roles closer to the core product tend to be more prestigious, implying that few engineers would shed tears over the loss of jobs building trust and safety tooling.
Despite all these new vendors and services, there are still gaps. Trust and safety product leader Juliet Shen pointed out in a Threads post that the startups seem to be clustered around moderation flow and AI labeling, and listed a number of other common tasks that could benefit from innovative, purpose-built tooling. This suggests that the vendor ecosystem still has plenty of room to grow in the variety of services it offers. Regardless of economic winds, the field of trust and safety will continue to advance, and the evolution of the vendor ecosystem will play an important role in determining what its mature state looks like.