Rethinking Far-Right Online Radicalization

Justin Hendrix / May 13, 2022


Researchers Alice Marwick, Benjamin Clancy, and Katherine Furl this week released Far-Right Online Radicalization: A Review of the Literature, an analysis of "cross-disciplinary work on radicalization to better understand the present concerns around online radicalization and far-right extremist and fringe movements."

In order to learn more about the issues explored in the review, I spoke to Marwick, who is an Associate Professor of Communication at the University of North Carolina at Chapel Hill and a Principal Researcher at the Center for Information, Technology, & Public Life (CITAP). We spoke about a range of issues, including:

  • The current state of knowledge about the spread of far-right ideas, extremist and fringe movements;
  • The differences in studying far-right movements in the post-9/11 context in which the study of radicalization emerged;
  • The role of the internet and social media;
  • The relationship to security and law enforcement interests and where to draw the line;
  • How these ideas relate to the events of January 6 and the future of American democracy.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Alice, how would you characterize your research interests?

Alice Marwick:

I study social media. I primarily use qualitative and ethnographic methods and I'm really interested in the social, political, and cultural implications of popular technologies like Facebook, YouTube, and Twitter. In the last few years, since 2016, my primary research stream has been around far-right disinformation.

Justin Hendrix:

I'm talking to you a day after the publication of your literature review on far-right online radicalization. You start this out with a kind of level set. You say that “from the standpoint from which media, popular culture, and academia have often approached radicalization, the assumption is that to study the radical is to study the other. However, white supremacy and racism are hardly a new phenomenon in America.” You say that using the term radicalization suggests there is something novel and exotic about the spread of ideas that were actually fundamental to the founding of the US. Why did you feel it was important to start this with that level set?

Alice Marwick:

When you're studying the far-right in general, far-right cultures, most of these cultures are based on white supremacy as a sort of foundational aspect or a foundational ideology. People have been using this term far-right radicalization, but radicalization itself, as a term, has a really specific history. It comes out of this post-9/11 context where people were trying to figure out how to prevent further terrorist attacks that were primarily coming out of Jihadi communities. At the time there was this gigantic global security apparatus with a lot of money to try to understand why people who, in many cases, had grown up in, for example, the UK or the US, were committing terrorist acts, acts of political violence, based on Jihadi ideology.

The idea of radicalization was actually seen as a more progressive way to combat this phenomenon than previous efforts. Islamophobia is very dominant in the United States, right? Just go back to Edward Said and Orientalism. This idea of the Middle East as this sort of mysterious place of the other is pretty foundational to American society. For years there was this assumption in early studies of terrorism that there was something intrinsic to Islam that made people violent. Of course, now we know that that's absolutely untrue. Rather than just painting everyone who is Islamic with the same brush, radicalization was a way to say, "Hey, there's certain strains of this Jihadi ideology that perhaps are more likely to make people commit political violence and we want to find out how that process happens and we want to stop it."

However, since then there have been a lot of really great critical terrorism scholars who have pointed out that that perspective on radicalization is basically still Islamophobic and still 'others' the global Muslim community, which is a billion people. A lot of these studies of radicalization did things like surveilling mosques or singling out particular people, looking at their social networks and trying to identify people who might be at risk of committing political violence.

Now, in general, most of these programs have not worked. They haven't been really successful, and that's because it's very hard to determine who's going to commit political violence. Even if you have an organization where some people are likely to commit terrorist acts and others are not, trying to figure out why some people commit violence while others don't is almost an impossible task.

You take this term with all of its baggage that's coming from this perspective and you put it onto the far-right and it just doesn't fit, because the idea of radicalization is so firmly based in this othering of Islam as this mystical, mysterious, Orientalist culture and religion that when you try to apply it to something that, in many cases, is being parroted by very mainstream politicians, that is appearing in mainstream media, that is appearing in large US-based hyperpartisan media, it just doesn't work.

Before I threw the baby out with the bathwater I wanted to really delve deeply into this radicalization literature and say, "Well, there's a lot of smart people who've worked on this for 30 years. What can we learn from this literature and how can we apply it to this phenomenon of people taking on far-right and fringe ideas that they encounter online and then, in a very small percentage of cases, committing political violence based on those ideologies?"

Justin Hendrix:

In a way, you're kind of trying to rescue the best of the work on radicalization in order to move to a new place.

Alice Marwick:

Yeah, I think that's the ideal. The problem is, what I found was that when you get into the actual empirics of radicalization research, the work of people who aren't just throwing around political rhetoric but are actually on the ground doing these studies, you find a lot of uncertainty. There's been all this research that's tried to determine who commits political violence. It's not about your personality traits, it's not about whether you're mentally ill, it's not about whether you're in poverty, it's not about whether you're a member of a marginalized group. None of those things predict political violence. In fact, there is a whole body of research that looks at pathways to radicalization or pathways to terrorism. Basically what that concludes is that the pathways are so individual and so contextual that the same factor that might, in some cases, cause someone to commit political violence might prevent another person from committing political violence.

One of the studies that I really enjoyed was by a researcher who did ethnographic work on women who were part of the Salvadoran guerrilla movement in the 1980s. Really incredible work, a researcher going to El Salvador and interviewing all these women who were involved in a radical political movement. She found that, for example, becoming a mother, which is a very typical part of many women's life cycle, pushed them in different directions. In some cases women were like, "Oh, I can't... I've got to take a step back from this political movement because it's too dangerous and I have to be there for my kids and I'm just past that stage of my life." In that case motherhood prevented them from committing political violence. But there was another group of women who were like, "I need to be a great role model for my kids and I need a better future for my children and the best way to do that is for my group to obtain their political goals." For them motherhood was a catalyzing element towards committing political violence.

Basically what most of this research finds is that any programs that are trying to detect radicalization, whether that's through computational methods or social science methods or outreach or civil society groups or activism, are mostly going to fail if they're focused on looking for commonalities between people who commit political violence.

Justin Hendrix:

I would encourage people to look at the review. You get into what makes people vulnerable to radicalization: individual psychological characteristics, systemic causes, movement-level causes, other structural paths. How are people radicalized? You talk about pathways and pyramids, social networks, relational approaches, and this idea of radicalization as agented meaning-making. What does that mean?

Alice Marwick:

That's really getting into the weeds of academic language. Basically there's all these different strands of research. There's one set of people who were trying to figure out a universal pathway to radicalization. Basically what that research found is that it's too varied; you can't come up with one single path to radicalization. There's another group of people who tried to use social network analysis to map out whether being in contact with people who are radicalized, or who are in these politically violent groups, makes you more likely to commit political violence. I think that's a slightly more useful area of research when you're talking about the far-right.

What we find is that what actually makes people more likely, not even more likely to commit political violence, but more likely to justify the use of political violence, is for them to take on the same ways of thinking and feeling as a radical group. If we're looking at the far-right, and I'm doing a bunch of other research on this right now, spending a lot of time in far-right online groups and reading texts from these groups, you see that there are these ways of thinking about oneself as a victim and about seeing people of color, specifically, or immigrants or feminists or trans people or queer people as the enemy, as the other. You take on that positionality. You think the same way that the other people in the group think. You see yourself as having the moral high ground and the other people as the enemy who, in some way, are victimizing you.

You're threatening white culture, you're taking economic opportunities away from white people, you're threatening white children, you're threatening the white family. All of these things make it more likely that you're going to justify committing violence against these people because they become a threat to you. Whether or not they're actually a threat to you is of no consequence. It's about this threat perception. To me, that is a really fruitful area of research when it comes to the far-right, because we know very well, from great work by many other scholars, that a lot of the appeals to white identity that are used in far-right rhetoric are also diffused into mainstream right-wing rhetoric. Former President Trump especially, I think, was definitely somebody who amplified a lot of these messages: America used to be great and now it's no longer great, and the people who are not making it great are immigrants, they're people from these S-hole countries, they're people who are coming in and they're dangerous and they're threatening our way of life.

You can see a really clear through line between that type of mainstream rhetoric and taking on these far-right or white nationalist or white supremacist values. I think that's extremely troublesome and that's distinctly different from the way that we see radicalization talked about when it is used to describe Islamic or Jihadi movements instead.

Justin Hendrix:

You eventually get on to what is the role of the internet in radicalization and you talk about issues around platform affordances, with a bit of a focus on YouTube I'd say, and then generally online discourse and how all this fits together. What is the role of the internet in your assessment?

Alice Marwick:

I think that's the ultimate question. How does the internet contribute to this process? On one hand there's this large body of research on how social media, mostly YouTube, contributes to radicalization. Most of this literature talks about how you go onto YouTube and you go on looking for something innocuous. Maybe you're looking for a video about the President or you're looking for a video on the environment or something, and then very quickly you find yourself in this rabbit hole of far-right content. A great deal of emphasis on combating online radicalization has been targeted towards platform companies trying to get them to deplatform people who are pushing this type of content, but also to change their recommendation algorithms so that people aren't being recommended far-right content.

I think that is still very important and I think it's a very fruitful area of research. A former coauthor of mine, Becca Lewis, who's now at Stanford, has done extensive work on how even pretty mainstream right-wing channels, like Ben Shapiro's, contribute to mainstreaming the ideas of white supremacists like Stefan Molyneux. I think that's still something that we have to be mindful of and not make false distinctions between some of these types of content.

On the other hand, I think it's a bridge too far to say that just because people are being exposed to online content, they're then taking up the ideas in that online content. My coauthor, Katie Furl, spent some time going through a lot of the computational social science studies of online radicalization and I have to say we were shocked that even when online radicalization was in the title and the abstract, there was no definition of what online radicalization was and virtually no radicalization literature was cited. Again, these are large bodies of research with hundreds of scholars working actively in these areas. Frequently they were saying, "Exposure to extremist content equals radicalization," which is just not the way that media works.

What I can say about this literature is that it seems very clear that the internet contributes to mainstreaming extremist ideas. It does put them in front of a larger audience of people. I think also that because there are so many opportunities on the internet, not just on mainstream social media sites but in all kinds of online communities, on smaller sites like Telegram, in specific hashtags or groups or communities on TikTok, on forums, message boards, Facebook groups, whatever, there are lots of places where people are holding these ideas about the threat to white people from people of color, the threat to men from feminists, the threat to straight people from trans people, et cetera. Within these communities you can have that kind of meaning making and taking on the thoughts and feelings of people in your community because there aren't any other ideas in these communities.

If people are moderate or they disagree with the thoughts and beliefs expressed in these communities they usually leave and go somewhere else, so you do have this sort of effect where sometimes these communities get more and more extreme in their belief systems.

I've been looking at a lot of far-right Telegram channels and one of the things to me that's interesting is they tend to have several themes that get repeated over and over again. In some of the white supremacist Telegram channels I'm in you'll frequently see people post news stories where Black people are committing crimes and you see those every day. You might see 10 or 15 news stories a day about a Black person committing crimes. Sure, Black people commit crimes, so do white people, so do Latinx people, so do Asian-American people, but none of those stories are in there. I don't like to use the term echo chamber or filter bubble, because I think it's a little bit more complicated than that, but you do end up getting these communities where a single point of view is pushed very heavily. I think it does make people more likely to take on the points of view that they're seeing in these channels or these spaces given the amount of repetition of content and the encouragement from other people to take up this belief system.

Justin Hendrix:

One of the things you kind of hint at with regard to the role of the internet, that I think about a lot, is the long term implications of exposure to content. Not, "I saw a few YouTube videos that either I found in the long tail of stuff that I was looking for in the search tab or perhaps I happened to see in the recommendations column," but the idea that certain ideas have come to pervade the media, pervade society, whether it's Breitbart or Tucker Carlson, and that over time those things ultimately do have an impact and do, I suppose, lead people to far-right ideas and that's the bigger problem. It's hard to kind of study that with a deprivation study for Facebook or something. How do you get at that? I mean, YouTube's still playing a huge role in that. They're still a major channel for that activity and yet... I don't know. How do you think about their responsibility in that context?

Alice Marwick:

There's a couple of questions there. The first is how do we study the long-term effects of media messaging? In media and communication studies we often look at what's called cultivation theory in the work of a media theorist named George Gerbner. George Gerbner was interested in the long-term effects of television, specifically around violence. He did these longitudinal studies where he measured people's attitudes towards various social issues over long periods of time.

What Gerbner found was that watching this type of content over a long period of time led to what he called the Mean World Syndrome, which is basically that viewing a lot of cop shows or local news, with its "if it bleeds, it leads" kind of point of view, makes people more fearful about the world around them. It leads them to greatly overestimate the rates of crime. It leads them to distrust their neighbors. It leads them to isolate themselves. These are obviously not prosocial effects. These are things that are bad for society, but you wouldn't necessarily see that as being a long-term effect unless you had done this type of study. It wasn't so much about the specific news stories themselves, but about the patterns of news coverage over a long period of time.

When it comes to social platform responsibility it's really difficult, because as you know social platforms will say that they are just hosting this content, they are not providing editorial functions, they don't have... The scope and scale of the sheer amount of content on a site like YouTube means that there's, realistically, no way for content to be moderated at the level that it might need to be in order to make sure that this type of messaging isn't promoted. I think, also, it's important to remember that a lot of this messaging is really mainstream. The idea that immigrants are dangerous is found all over the place. Even though it may not be accurate it is a mainstream talking point and part of mainstream political rhetoric.

How do you cut that off without violating people's free speech rights? How do you cut that off without making it seem like you're biased against a particular point of view? Because the lines between mainstream and extremist rhetoric are so porous, I think it's a real challenge. One of the things that I want to do with my work is point out the porosity of this and say we need to stop thinking about this stuff as extremist, we need to stop thinking about this stuff as radical, and start grappling with and reckoning with the fact that ideas that just 10 years ago would have been seen as unspeakable are now things that people encounter every day. What are the implications of that?

Unfortunately, I think one of the implications of that is that we're going to see increased justification for political violence and more propensity for things like the January 6th attacks.

Justin Hendrix:

Ultimately, is “online radicalization” a useful concept?

Alice Marwick:

We do not think that radicalization is a useful concept for thinking about people taking up far-right and fringe ideas that they encounter online. I'm not exactly sure what we should replace it with. That's kind of the project I'm working on right now. The first of the literatures that we've been looking at is the literature around mainstreaming. Mainstreaming is a primary strategy that far-right groups have adopted in the last 10-15 years to try to get their viewpoints into mainstream rhetoric. One of the things that's really interesting about studying the far-right is that a lot of the studies come out of Europe, because European countries generally have many political parties. In Europe you often see a radical right or far-right party that has seats in parliament and runs people for office, but that is what we would consider to be far-right. They're anti-immigration and, in some cases, explicitly white nationalist.

In the United States we don't have a multi-party system, we have a two-party system, so the political strategy that the far-right in the United States has adopted is to try to mainstream their points of view through the Republican Party: take the party that they felt would be most sympathetic to their points of view and try to push far-right ideas through it. Unfortunately, I think we can all agree that that's been pretty successful, especially with the candidacy and presidency of Donald Trump. Then we see other, even more far-right candidates being elected to office around the country.

I think mainstreaming is actually a very generative concept to use to think about this and I think that it gets us away from this idea of the radical and the other and instead helps us see these commonalities in messaging and sort of how a lot of these messages are on a continuum rather than being on one side or the other.

The second concept we've been drawing from is the idea of conversion, which comes out of religious studies: when somebody becomes a born-again Christian, there's this idea that they've had this moment and their entire life has changed. This is a concept that I think resonates a lot with the people in these communities, who often talk about their red pill moments. They often phrase it as if it's a single moment where they realize that everything they've been told is a lie and the scales fell from their eyes and all of a sudden they realized that feminism was a lie or white people are superior or whatever it is that they believe; that they've had this red pill moment.

We see a lot of that happening in these communities. People will discuss their red pill moment, they'll talk about how it feels. This is very similar to how people talk about conversion, but in actuality it's much closer to a process of socialization and it's not usually something where you're on the road to Damascus and there's a light in your eyes and you fall down and you're like, "I've been red pilled." That's really not, generally, how these things work. I think the literature might give us some really interesting insights into how it does work.

The third literature we've been really looking at is on fringe communities that aren't necessarily political: flat Earthers, or an internet group called Otherkin, people who believe that they're actually animals or mythical creatures, or people in UFO communities. People who are basically taking up viewpoints that are counterfactual to the mainstream, but still believing them. What can we learn from these studies? Because they're much more about conventional online communities. People have been studying online communities since the early '90s, so we have a lot of literature and a lot of good studies on that.

That's what we're doing right now, is we're kind of delving into that literature to see what might be a better concept than radicalization to describe people taking up these ideas they encounter online.

Justin Hendrix:

I want to get you to speculate a little bit about maybe some of the downstream implications of your work. I was listening yesterday to a talk by someone who is an expert on the Global Internet Forum to Counter Terrorism, the GIFCT, who was talking about this whole category of TVEC content, terrorist and violent extremist content. Around those ideas and modes of working and engineered systems, there's an enormous amount of investment into thinking about radicalization and thinking about how to kind of prevent social platforms from playing a role in it or from spreading the worst artifacts of it. How do you think that behavior or that activity may change if the ideas that you're exploring here are potentially adopted?

Alice Marwick:

I am not an expert on Jihadi radicalization. That's not my research area and I would defer to people who are experts in that area. I think it's very clear that there are a lot of Jihadi movements, like Al-Qaeda, who have used the internet to try to reach out to young people around the world and get them involved in their movements for political violence. I think that's a pretty well-trod path. There are entire databases of the types of content that these groups put out. There are people who spend their lives studying the types of videos that are made by these groups or the types of messaging that they have. There is great critical work, in terrorism studies and radicalization studies, that hasn't necessarily filtered down to that practical, on-the-ground countering violent extremism work, and I'm not going to do anything to change that.

The place where I specifically want to make an intervention is in applying these models to the far-right, because the United States, in general, has been very loath to use the same tools to tackle domestic terrorism that it has used to tackle foreign terrorism, and there are some good reasons for that. There are plenty of people who do not think we should make domestic terrorism a crime. There are progressive abolitionists who don't think that it would be a good idea, because it just increases the carceral apparatus. If you look at who may be labeled a domestic terrorist, President Trump and many people on the right believe that Black Lives Matter is a terrorist organization, they believe that Antifa is a terrorist organization. If you open these tools up to people there is a chance that they're going to be used against people of all political persuasions and stripes, and that makes me nervous.

I think if we're going to start trying to counter the increase in far-right political violence, then we need to understand how that happens. Trying to take a body of literature that was developed in a completely different cultural and national context and apply it to something that, in many cases, is very deeply rooted in the histories of the United States, in families, in communities, in the stories that we tell ourselves about who we are as Americans, that's not going to work. That's where I'd like to make that intervention. As we start moving towards countering far-right political violence with a toolbox, let's make it a toolbox that's actually appropriate for the problem that we're trying to solve.

Justin Hendrix:

Coming back to January 6th. There's a select committee which is tasked with a range of different things. One of them is looking at the role of online platforms in the events of January 6th, whether those online platforms played a role in facilitating or otherwise enabling or empowering the individuals who attacked the Capitol. How does this thinking fit with your considerations around January 6th as an event, which I know you've also written about?

Alice Marwick:

My colleague, Francesca Tripodi, and I submitted a report to the select committee on January 6th and we were specifically looking at different ways in which there was a feedback loop between what was going on on social media and mainstream politicians and the Trump administration. One of the things we talk about in the report is the Stop the Steal groups that were seen as being very organic, but the term was actually created by Roger Stone, who registered the domain back in 2016. It was a narrative that had been primed by Donald Trump for months and months leading up to the election. In another example, the conspiracy theory that Dominion voting machines were converting votes for Trump to votes for Biden actually comes out of an 8chan board where QAnon research was going on, and it got picked up extremely quickly by people in the Trump administration and then all the way up to Trump himself.

One of the things we need to understand is that this is not stuff that's happening in a vacuum online. This is part of a larger ecosystem which includes hyperpartisan publications, mainstream conservative media, and mainstream conservative politicians. If we ignore that part of the loop then we ignore the role of political elites in spreading disinformation, which I think is incredibly important, because it's very easy to say, "Oh, well this is Facebook's fault, this is YouTube's fault, this is Twitter's fault." Yes, I think the internet certainly makes it easier for these ideas to spread and it definitely makes it easier for people to coordinate direct action, like the January 6th attack, but you can't take the responsibility off of this other half of the equation.

That's sort of what I'm hoping the January 6th select committee is able to express: the culpability of different people in the administration and different political elites in what happened on January 6th. That it wasn't just an isolated incident that came out of the internet, that it had several years of history leading up to it and that it was a participatory process between all these different actors working in tandem.

Justin Hendrix:

I feel like we have to somehow preserve, certainly, blame where blame is due with those political elites and other actors in that ecosystem as well as the individuals who ultimately injured police and broke down doors and windows, and yet it doesn't seem right to let Facebook off the hook either for the role that it played and which it identified in its own research. I don't know. I hope that the committee's able to kind of get across that nuance in these public hearings that are coming.

Alice Marwick:

I think it's a really complicated issue. I think the question of how much do we bring social platforms into it... Frequently, when I do my work and I kind of talk in public about how these issues are really complicated, people are like, "You're just trying to let Facebook off the hook," and I'm absolutely not trying to let Facebook off the hook. One of the most interesting things about the Facebook Papers, to me, was the extent to which Facebook has known all along about the problems on its platform and has been very disingenuous publicly about things like spreading disinformation or inculcating communities of people who are talking about these things.

I think the best thing Facebook could do would be to listen to its internal researchers, because I think it's important to remember that these gigantic corporations are not monoliths and that there are a lot of good actors and people who work at Facebook who honestly want to work for a company that is doing good things for the world and want to prevent problems from happening on the platform. But in talking to various researchers and people who work in trust and safety teams and things like that, I find that often if these researchers or these workers are advocating for changes that go against the growth impetus of the platform, any changes they suggest are going to be shot down. If they're like, "We want to add these anti-harassment messages," or, "We want to do this," and Facebook's growth team finds out that it causes a 0.002% decrease in the number of people who use the platform, they're not going to implement it.

I would really like to see Facebook actually take a look at the kinds of great work that are being done internally and pay attention to the people who are saying, "No, actually, this could make the platform better and safer for people," but that's very difficult. That's not going to add shareholder value right away, right? It's not really how big tech companies work with what their priorities are.

Justin Hendrix:

I should hope they can ultimately, eventually, someday, somehow take a decision to follow the advice of those researchers, particularly when it comes to these very serious matters, rather than following their commercial interests, but something tells me that may be too much to ask.

Alice Marwick:

I mean, they have a fiduciary responsibility to their shareholders to maximize value.

Justin Hendrix:

I want to ask you about one last thing. It's January 6th related and I think related to this generally. In the months since January 6th, 2021 there has been a renewed focus on far-right extremism in the United States: the administration has come out with a strategy for contending with it, the FBI has made various investments and changes to focus on it, the Department of Homeland Security has done the same, and I presume quite a lot of the other law enforcement and intelligence apparatus across the country has as well. We've even seen the FBI invest in more social media monitoring tools and in what it calls predictive analytics. I sort of sense that baked into that is this idea that computational social scientists are eventually going to create dashboards that are going to allow us to predict when certain communities or people may be about to commit political violence. What do you make of that?

Alice Marwick:

I'm not a fan. This is getting into my other research stream, which is on online privacy. These types of predictive analytics or predictive algorithms sound great, because the idea is we're going to take out human judgment, we're going to make a scientific judgment based on the numbers, we're going to use these cutting-edge big data techniques to figure things out. The problem is that all of these algorithms and the predictive capacities of these different technologies are very prone to failure, and they're absolutely subject to the quality of the data that's fed into them, which is always full of bias. Any data scientist will tell you that data itself is very messy and murky. It's not as clean-cut as people would have you believe. Often there's absolutely no check on the power of these algorithms.

For example, predictive policing algorithms are really problematic because they often suck up information from social media, completely decontextualized. It's this sort of Minority Report situation where just being in a social network with somebody who's committed some kind of crime, or is perceived to have committed some kind of crime, can make you also seem like a threat. Often these tools are used most harshly on members of already marginalized communities.

I am not a fan of these approaches at all. I think they are very simplistic. I think they massively increase surveillance. I think there are plenty of things that we already know about in terms of low hanging fruit. For example, we know that there are lots of people who are members of militias or the far-right who are involved in local police around the country. That is a known issue. The FBI has written about it. It is something that could easily be... Well, maybe not easily, but it's something that doesn't require algorithms to identify. These people are putting pictures of themselves on Instagram with white power insignias. There's so many things that could be done that are more messy and complicated and require resources that are not as neat and clean as these algorithms, but would probably, actually, do more to prevent the problem.

I generally am against predictive algorithms in social services or policing because I think they're intrinsically biased and often hurt the people they're supposed to protect.

Justin Hendrix:

Alice, what's next? What's the next big project?

Alice Marwick:

The next project is figuring out how people come to believe these ideas that they encounter on social media. We've been doing this three-part research project for the last year and a half. We've been analyzing discussions of how people got red pilled in far-right forums on Reddit, Gab, and Discord. The second part is doing ethnographic fieldwork on far-right Telegram and conspiracy TikTok. What I'm doing right now is interviewing people who have taken up alternative or fringe ideas that they've encountered online that are not far-right ideas: conspiracy theorists, Bigfoot enthusiasts, people who are into astral projection. It's been a really, really interesting project and I'm learning a lot about how people are converted to different belief systems, what counts as evidence online, how these communities function. I'm hoping in the next year or so to have, ideally, a kind of big theory of how these things happen.

Justin Hendrix:

Well, when that theory's ready I hope you'll come back and tell me more about it.

Alice Marwick:

Absolutely, Justin. Thanks for having me.
