At Thursday’s Senate Commerce subcommittee hearing on Instagram and teen mental health, Senator Richard Blumenthal looked the part of an out-of-touch “distinguished gentleman” who couldn’t possibly comprehend the empowering tools of technology, at least for a moment. In a clip that went viral across social media platforms, Blumenthal asked Facebook Head of Global Safety Antigone Davis a vexing question: “Will you commit to ending Finsta?” Davis responded as if she were a patient, tech-savvy niece explaining “the Interwebs” to her clueless older uncle. Blumenthal doubled down and sought to clarify that “Finsta is one of your services or products, is it not?”
Predictably, the exchange quickly went viral on Twitter. The vast majority of quotes attached to retweets of the clip reflected a common theme: politicians like Blumenthal are clueless and too antiquated to fathom the complex world of modern technology. The “dunking” on Blumenthal fits perfectly into a long-standing, proto-libertarian narrative driven by Silicon Valley: technologists are “makers of the world,” and ineffectual or corrupt government agents exist only to impede the creators from liberating the world through “tools of emancipation.”
Wonderful scholars and writers like Fred Turner, Evgeny Morozov, Cathy O’Neil, and Meredith Broussard have been writing for years about the dangers of over-reliance on technology to solve complex social problems. Adam Curtis’ provocative film, The Century of the Self, hammers home this theme of technology-driven, consumerist, scientific rationalism replacing government as the source of problem solving. Who needs the state when the market can meet all your needs? Blumenthal’s poorly phrased question fits nicely within this paradigm.
Sen. Blumenthal’s phrasing was clunky, no doubt. But earlier in the hearing, he gave a clear articulation of what Finsta is (a secondary account for circulation among one’s close friends, ostensibly out of the purview of parents) and how central it is to Facebook’s growth strategy. In addition, Blumenthal presented images of a “Finsta” account that he (or, more likely, one of his staffers) created, posing as a 13-year-old. In the hearing, he provided evidence that this fake account was inundated with suggestions to follow accounts that promoted “self-injury” and “eating disorders.” This uncomfortable truth places technology companies like Facebook in a different light. Rather than being creators of a platform that “brings the world together” and advances human freedom, this narrative presents Facebook as a for-profit corporation, bound by shareholder expectations of continued growth, that internally explores whether to keep growing at the expense of potential harm to children.
This hardly makes Blumenthal a tech expert, but it does suggest that he, or his staff, clearly understands the ethical complexities that social media platforms present to us. There is a real conversation to be had regarding whether we should allow teens to create separate anonymous accounts. There are challenging ethical and human rights implications of eliminating Finsta accounts, something Evan Greer of Fight for the Future points to in this Twitter thread.
But “Finstagate” also raises another ethical question. Why are we as social media users so ready to sort social media posts into discrete categories? Why do we see or read a snippet of a much larger and more nuanced event that comes across our feed and feel compelled to share it and draw a definite conclusion about what the snippet means? It is almost as if we are acting like algorithmic classifiers, quickly sorting data into categories.
Are we ourselves adopting an algorithmic mentality? Obviously being on algorithmically driven social media doesn’t make you start thinking like an algorithm. But in an era that prioritizes speed and visibility, are we pressured to quickly sort and classify things before others do? What are the implications of becoming a people who prioritize classification speed over being right? Why would we even do that?
Too much of the discussion about what algorithms do to us focuses on the “macro” level, what some call the “harm paradigm.” It goes something like this: tech companies have an agenda. They manipulate us for profit by prioritizing salacious or otherwise controversial content to keep us on the platform and extract more information from us. This perspective animates the very good work of people like Tristan Harris and Shoshana Zuboff.
While there is more than a little truth to this perspective, it can also be overblown. Online, as in life, reality is much more complex. A wonderful 2018 essay by João C. Magalhães at the University of Groningen points out that algorithms are not leviathans that exploit unwitting users. Instead, users develop a virtue ethic through their engagement with algorithms. The platform’s power isn’t the complex code and design; it’s the way users attempt to modify their behavior in relation to what the algorithms reward. We have agency in choosing how to develop a virtue ethic, but only within the constraints that social media tools present us with. We can choose to be however we want on Twitter or Facebook, but we’re also presented with consequences for the choices we make.
All of us on social media are “subjects becoming.” Twitter produces the possibilities of voice, but does so under extreme time pressure. If you want to become relevant, you must have something immediate to say, and that expression in real time must “connect” with an audience, even if the connection is based on an inaccurate representation of a larger reality. Making fun of Senator Blumenthal’s “Finsta moment” allows users to tap into a powerful pre-existing discourse of private-sector tech innovation being superior to the muddling, stultifying bureaucratic state, a frame that already resonates with large numbers of Twitter users, particularly those in tech. Alternatively, making fun of the clip from the perspective of those who know tech companies need more serious scrutiny means you mock him for not “getting it” in a way that “needs to be gotten.” Either way, there is a receptive audience that will reward a post with strong engagement.
How does pointing out a Senator’s cringe-worthy moment fit into the larger virtue ethic a user is trying to create? The “affordances” of Twitter create ethical dilemmas: if you pass up an opportunity to easily “go viral” with followers now, do you dilute your ability to be heard when you want to say something more nuanced and important later? Visibility is an important ethical imperative for many people on Twitter. Is tapping into pre-set discourses to maintain an audience’s interest for more nuanced content to come simply a means to an end? The question then becomes: how far will users go to be visible? Why do we share instances of elite cluelessness, even out of context? Maybe there is something uniquely American in taking elites down a peg. Tocqueville recognized this “levelling impulse” Americans have to see themselves as equal in stature no matter the differences in expertise or credential. Maybe in this instance, “dunking” on Senator Blumenthal legitimates the need for smart tech people to educate, or to “serve as watchdogs” for, clueless elected officials. In any case, each of us has serious questions to ask ourselves about our “algorithmic virtue ethic.” To what extent does it include promoting content that distorts more complex realities in order to keep important connections with an audience of interest?
Dr. Marichal is a professor of political science at California Lutheran University. He specializes in studying the role that social media plays in restructuring political behavior and institutions. In 2012, he published Facebook Democracy (Routledge), which looks at the role that the popular social network played in the formation of political identity across different countries. His most recent work (with Richard Neve and Brian Collins) looks at the ways in which social media platforms encourage antagonistic political discourse and how they could be regulated. In addition, Dr. Marichal (with collaborators) is using computational social science methods on a number of projects, including an examination of fracking debates on Twitter, a study of candidate branding in 2016, and a study of political talk on Facebook. In 2018, Dr. Marichal organized a mini-conference on Algorithmic Politics for the Western Political Science Association. Currently, he is working on a book that looks at the damaging effects of algorithms on democracy by creating an “algorithmic mentality” among citizens.