Using Verification to Help Social Media Users Recognize Experts

Dallas Amico-Korby, David Danks, Mara Harrell / Jul 17, 2023

Dallas Amico-Korby is a PhD candidate in Philosophy, David Danks is a Professor of Data Science & Philosophy, and Mara Harrell is Teaching Professor of Philosophy and Public Health at the University of California San Diego.

Verification has become a hot topic for social media platforms of late. For example, both Twitter and Facebook are experimenting with paid verification, as well as different ways of distinguishing different types of users (blue check marks, gold check marks, and gray check marks abound!). Most of the focus around verification has been on authenticity and impersonation—how do we make sure that users can determine whether an account with the name “LeBron James” is actually LeBron James? But while authenticity and impersonation are surely important, we believe that verification can and should be used to do much more. Specifically, social media platforms should verify experts and use verification markers to help users recognize experts on their platforms.

When you type a query like “back pain relief exercises” into TikTok or Facebook’s video search, you’ll be presented with a wide variety of results: people claiming to be doctors with miracle advice (“Dr. Bill’s 2 quick tips to alleviate back pain (INSTANTLY!)!”); people claiming insight from their own experiences (“Back pain HACKS I discovered while recovering from surgery”); and people using medical jargon (“3 Scoliosis stretches to improve spine curvature and reduce pain NOW”). For people with back pain and no medical training, sifting through these results to find quality information can be confusing and overwhelming. Is Dr. Bill actually a medical doctor? Which of these videos were made by hucksters or snake-oil salespeople? Even if the creators are who they claim to be and their motivations are pure, do they know what they’re talking about? None of this is clear from scrolling through the list of videos, reading their descriptions, or even (for most people) from watching them.

Unfortunately, this disorienting experience is not limited to TikTok and Facebook. Many platforms that host informational content leave users the task of evaluating content creators’ expertise, despite how difficult that task is. That this disorienting experience is so familiar reveals a problem with the design of these social media platforms. The nature of this problem becomes apparent when we consider three specific challenges that users, as individuals with limited capabilities in a highly complex world, face when trying to navigate today’s information ecosystem.

Challenge 1: None of us can be experts in every field, nor should we attempt to be. There are simply too many fields: no one person can be a plumber, bridge engineer, urologist, climate scientist, financial analyst, hair stylist, lawyer, veterinarian, philosopher, and basketball player all at the same time. Most people count as very successful if they master even one of these things.

Challenge 2: The fact that we can’t be experts in every field implies that we need to trust and rely on others; we need to rely on the “division of cognitive labor.” Alone, we may be able to make good decisions in our areas of expertise, but we will struggle to evaluate evidence and make good decisions in the many areas where we lack expertise. By relying on those who have expertise, however, we can greatly improve our decision making. Of course, this only works if we can successfully determine whom to rely on in the fields where we lack expertise. We benefit from the guidance and insights of experts, but only if we can distinguish experts from novices (even well-intentioned novices).

Challenge 3: It’s difficult and time-consuming to recognize experts. Expertise has to do with reliably getting things right. Expert plumbers reliably fix sinks. Expert bridge engineers reliably design bridges. And so on across a range of fields. For laypeople, however, it can be difficult to assess reliability: most of us don’t have the time to follow around multiple plumbers to determine which ones are getting things right. Most of us don’t have the resources or knowledge to evaluate whether a particular medical doctor has reliably diagnosed and treated her patients. And we certainly don’t have the time, resources, and know-how to make these evaluations in all the fields where we need to rely on experts in our daily lives.

So, what are we supposed to do? In our offline lives, we often rely on institutions to do the work of identifying experts for us. For example, we don’t typically attempt to determine from scratch who to trust for medical advice. Instead, we rely on things like medical licenses, or the fact that a doctor is employed by a reputable medical practice, to determine whether to trust them. Similar institutions do this work for a host of fields. You don’t have to evaluate the expertise of your pilot, because the FAA ensures that the pilot flying your plane is licensed and competent. You don’t have to make sure that your local bridge was built by someone who knew what they were doing, because the engineers have been vetted by a government-sanctioned licensing institution. And so on.

Unfortunately, when we log on to social networks, the work of these institutions is often left behind. When you go to a doctor’s office, you know the person you’re talking to is a credentialed expert. But when you scroll TikTok, you don’t know whether “Dr. Bill” is actually a doctor. We’re left with the overwhelming and impossible task of evaluating who has expertise on our own. Not surprisingly, people often fail at this task. As a result, we trust the wrong people, fall for misinformation, and fail to make optimal decisions.

Verification to the rescue

Our Proposal: social media platforms should use verification systems that are already in place (or that are easy to create) not only to authenticate users, but also to indicate verifiable expertise.

One might immediately raise a host of worries about this suggestion. First, social networks arguably should not be empowered to tell us who to trust. Second, expertise recognition is (as we’ve pointed out) time-consuming and demanding work, and social media companies likely have little reason to pursue it. Third, this task would risk drawing social media companies into tricky, politically charged debates, accusations of bias, or simply costly errors. Finally, this approach seems to undermine the ability of content consumers to consider alternative voices if they choose; one might think that people should be permitted to pursue the advice of whomever they’d like, even if that advice is not grounded in expertise.

We recognize and are sympathetic to these concerns, but we will show that platforms can implement our proposal without running afoul of any of them. The key is for platforms to offload the work of recognizing experts onto credentialing institutions that already exist. There is no need for platforms to reinvent the wheel by doing the work of recognizing experts themselves; instead, they should leverage reliable, trustworthy, and high-functioning credentialing institutions. The role of credentialing institutions is to credential; the role of platforms is to gather and display information about who possesses the relevant credentials (a relatively simple task for companies used to working with big data).
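
To make this division of labor concrete, here is a minimal sketch (in Python) of how a platform might check a creator’s claimed credential against a registry published by a credentialing institution. Everything here is illustrative: the registry contents, the field names, and the `verify_credential` function are hypothetical stand-ins, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical snapshot of a licensing board's public registry. In practice,
# a platform would sync this from the credentialing institution itself
# (e.g., a medical board's license-lookup service), not hard-code it.
REGISTRY = {
    ("MD-448291", "Bill Example"): {"field": "medicine", "status": "active"},
}

@dataclass
class ExpertBadge:
    display_name: str
    field: str    # area of verified expertise
    issuer: str   # the credentialing institution, not the platform

def verify_credential(license_id: str, legal_name: str) -> Optional[ExpertBadge]:
    """Return a badge only if the credentialing institution vouches for
    this person; the platform itself makes no judgment about expertise."""
    record = REGISTRY.get((license_id, legal_name))
    if record is None or record["status"] != "active":
        return None
    return ExpertBadge(display_name=legal_name,
                       field=record["field"],
                       issuer="State Medical Board (illustrative)")

print(verify_credential("MD-448291", "Bill Example"))  # badge if confirmed, else None
```

The design point is that the platform never judges expertise itself; it only relays whether a credentialing institution vouches for the account, and for which field.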

Of course, the obvious next question is: which credentialing institutions should platforms use verification to track? The natural answer is “track the reliable institutions,” but that simply pushes back the question one level: how should platforms practically assess the expertise of a credentialing institution? A sensible starting point is to track credentialing institutions sanctioned by democratic governments. First, democracies often invest in and create robust credentialing institutions because they have reason to: (1) ensure the safety of their people, (2) ensure that their people have the opportunity to flourish (by avoiding bad advice), and (3) promote an informed public. This also implies that governments and other institutions have likely spent time determining when—and in what disciplines—credentialing is important and when it is not. Thus, social media platforms do not have to spend time determining which fields require credentialing and which do not. Second, while not perfect, government-sanctioned credentialing institutions have impressive track records. From universities to medical licensing boards, the products of such institutions speak for themselves.

There are a range of ways that social media platforms could implement this suggestion. One extreme is what we might call the “school model,” where platforms only allow content from verified experts, much as schools (aim to) ensure the expertise of their instructors. This approach may be best suited for educational platforms and other settings where the relevant knowledge is concentrated in a small(er) group of identifiable experts. The downside of such an approach is the obvious risk of excluding the voices of non-experts, who might nonetheless have valuable insights to share.

At the other extreme, platforms could mark which informational content was produced by experts—for example, by using a display feature similar to Twitter’s check mark system to indicate that an account is run by an expert and produces content in their area of expertise. This sort of platform would help users easily recognize experts, while giving them the freedom to consult those who aren’t credentialed if they wish. This model would put users in a situation similar to one they’re often in offline. If my car breaks down, I can consult my mechanic, whom I have good reason to believe knows what they’re talking about. But I can also choose to consult my neighbor, in which case it’s up to me to determine whether they know what they’re talking about, and I take their advice at my own risk. Of course, this model is not foolproof; for example, many licensed doctors have spread Covid-19 misinformation. Nonetheless, it provides some guidance to users who are too often left entirely on their own in identifying experts on social networks. (As a proof of concept, consider YouTube’s attempt to implement something like this proposal for medical content in response to the spread of Covid-19 misinformation on its platform.)

There are, of course, a wide variety of models in between these two extremes that platforms could adopt. For example, a platform could prioritize verified experts in search, have different comment and reaction policies for credentialed experts, restrict sharing of non-expert content, and so on.
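
As a rough illustration of one such middle path, the sketch below shows how search ranking might modestly boost verified experts without hiding other voices. The `SearchResult` structure and the `EXPERT_BOOST` weight of 0.25 are invented for illustration; real ranking systems are far more complex.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    relevance: float        # the platform's usual relevance score (0 to 1)
    expert_verified: bool   # True if a credentialing institution vouches

# Illustrative weight: verified expertise nudges ranking upward but does
# not silence unverified creators. The value 0.25 is an arbitrary choice.
EXPERT_BOOST = 0.25

def ranked(results):
    """Sort results by relevance plus a fixed boost for verified experts."""
    return sorted(
        results,
        key=lambda r: r.relevance + (EXPERT_BOOST if r.expert_verified else 0.0),
        reverse=True,
    )

results = [
    SearchResult("Back pain HACKS from my recovery", 0.80, False),
    SearchResult("3 scoliosis stretches, explained", 0.70, True),
]
for r in ranked(results):
    print(r.title)  # the verified video (0.70 + 0.25) now outranks the 0.80 one
```

The key design choice is that unverified content remains visible and shareable; verification only shifts its position, preserving users’ freedom to consult whomever they like.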

Zooming out

When the internet was younger, there was considerable optimism that it would greatly improve decision making, give people greater intellectual autonomy, and result in a more informed public. The argument for this optimism was relatively straightforward. Before the internet, evidence-based information was primarily housed in libraries, universities, other institutions, and the minds of experts. But these were spread out across the world and often difficult for the average person to access. The internet, by contrast, could easily contain all of this information and make it accessible to the average person.

Of course, the internet has made information accessible. There are more than 5 billion internet users worldwide, and as Bo Burnham put it, “anything that brain of yours can think of can be found” there. But despite the internet being widely accessible and brimming with information, the supposed payoff is—at best—in doubt. While people increasingly rely on the internet, especially social media, for news and information, they often do not trust the information they’re consuming. If someone asks you why you believe something and you respond that you saw it on TikTok or YouTube, they’re likely to scoff. Instead of becoming trustworthy sources of information, like traditional institutions, these platforms have become places where people (reasonably!) are slow to believe.

So, what went wrong? Why has the promising vision of the early internet failed to materialize? Surely there’s no single or simple answer to these questions. But in our view, a significant part of the story is that, in democratizing the internet, we left behind many of the features that made the traditional, less accessible institutions trustworthy in the first place. Doing so made information accessible, but at the cost of making it difficult to determine what content is trustworthy. If we are to make our social media platforms–and the internet more widely–epistemically better, we must begin to reintegrate some of these lost institutional features.

Authors

Dallas Amico-Korby
Dallas Amico-Korby is a PhD candidate in philosophy at UC San Diego. He works on epistemology and the philosophy of technology.
David Danks
David Danks both develops novel AI methods and examines the ethical and societal impacts of AI technologies. He is a Professor of Data Science & Philosophy at UC San Diego, and serves on multiple advisory boards, including the National AI Advisory Committee.
Mara Harrell
Mara Harrell studies how students can effectively learn critical thinking and writing skills. She is Teaching Professor of Philosophy and Public Health at UC San Diego, and the Editor-in-Chief of the journal Teaching Philosophy.
