
Is an Anti-Fascist Approach to Artificial Intelligence Possible?

Justin Hendrix / Mar 23, 2025

Audio of this conversation is available via your favorite podcast service.

What is necessary to develop a future that is less hospitable to authoritarianism and, indeed, to fascism? How do we build collective power against authoritarian forms of corporate and state power? Is an alternative form of computing possible? Dan McQuillan is the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence, published in 2022 by Bristol University Press.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a non-profit media venture intended to provoke new ideas, debate, and discussion at the intersection of technology and democracy.

A number of researchers who study the intersection of technology and politics point to the concerning ways in which artificial intelligence can be deployed in the interests of authoritarians. Back in 2019, the Carnegie Endowment's Steven Feldstein wrote in the Journal of Democracy that, "Around the world, artificial intelligence systems are showing their potential for abetting repressive regimes and upending the relationship between citizen and state, thereby accelerating a global resurgence of authoritarianism."

Six years later, look around. That potential has been converted into a stark reality as we face the prospect of the failure of democracy in the United States and the continued trend toward illiberalism and authoritarianism abroad. As someone put it to me recently, "It makes sense. AI, in many ways, is a tool to put more information, more systems, more people under the control of centralized technologies."

Today's guest has been thinking about these things for years, and a couple of years ago, just before the catalytic moment sparked by the release of OpenAI's ChatGPT, he published a book that asks us to resist and reimagine artificial intelligence through a liberatory rather than a repressive lens.

Dan McQuillan:

I'm Dr. Dan McQuillan. I'm a senior lecturer in creative and social computing at Goldsmiths, which is part of the University of London. And I'm the author of a book called Resisting AI: An Anti-fascist Approach to Artificial Intelligence.

Justin Hendrix:

I'm excited to speak to you about this book today. To start, I want to ask how you came to this curiosity around the intersection of artificial intelligence, technology, and politics from a PhD in experimental physics?

Dan McQuillan:

Physics, I suppose, has a sort of noble minority tradition of people holding to their values and dissenting from the mainstream. I think there's a group which has been reconstituted recently called Science for the People, and I'm pretty sure they got their beginnings in a dissenting faction of the American Physical Society who were complaining about the way the physics establishment had basically signed up to the Vietnam War.

Anyway, that's not my bio. I did the PhD, and then I decided that industrial-scale science was not the exciting project of finding final answers that I'd really hoped for. So I took quite a side turn into working with people with mental health problems, working with people with learning disabilities in the community. And I did a lot of that kind of stuff and then found that a lot of those organizations I was working with needed people who worked with computers. So obviously, I'd had quite a lot of experience of that doing a PhD in experimental particle physics.

So I started to get back into bridging those two worlds in a way, really. Things that were working on a sort of grassroots level to help people, plus IT. Followed that line through the beginnings of the internet, through the beginnings of the web, onto web 2.0. And you probably remember that time when things seemed just very exciting and open, and there was this possibility of, oh, maybe this stuff can really help us reconfigure social relations. I did a thing called Social Innovation Camp in different places like Kyrgyzstan and Sarajevo and places like that to try and bring things together: okay, can we use technical change, or can we hybridize technical change and social change?

It was a good thing. Looking back on it, some of the ideas were not well grounded, but the process was good. And eventually, I worked in places like Amnesty and other human rights organizations doing digital human rights and related things, and then stumbled backward into academia about 10 or 11 years ago and have been there ever since. And I just have that kind of foot in both camps, really.

I saw the rise of big data, as it was called then. Paid a bit of attention, it was in my area, I'm in a computing department, and moved on to neural networks and related technologies. I really wanted to understand what they were doing. And I guess because of the background in maths, actually, particularly the maths I did for my PhD, I got a kind of quick head start on that stuff. And so, when I looked into it, I realized certain things about the claims being made. And I was already political, I've also had, parallel to all that stuff, a long history of being active in social movements, so I just put two and two together really.

Justin Hendrix:

My listeners are used to hearing me interview authors of books and talk through their ideas, but I'm coming to you a couple of, three years after you published this book. The fact that you wrote this book, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, in 2022, just prior to the launch of ChatGPT and the beginning of the current frenzy around generative AI, this particular form of AI, were you aware when you published the book that we were on the cusp of this moment of enormous curiosity about this technology? Did you see it coming, or did it hit you like it did me? I feel like I was almost surprised that people were so surprised when ChatGPT came along.

Dan McQuillan:

A bit like that, really. I'd mucked about with GPT models a little bit, more to demonstrate the limitations of these things. And the book was published, as you say, pretty much six months before the ChatGPT moment. I didn't see that happening and, again, I was a bit bemused at the time. It's taken me some time in retrospect to figure out the different components of why people found it so compelling and in particular, which obviously is a moving target, why so many entities are prepared to invest so much of various kinds of capital in it, whether it's financial capital or political capital or social capital. That in itself is a question that has an evolving answer, but I did actually also find it very interesting, the catalytic effect it had on people's perceptions. Although I now feel I've got a bit more of a grip on that. But I thought it was crap from the beginning, generative AI that is, and I still think it's crap.

That bit didn't change, and it wasn't something that I'd addressed in the book. In retrospect, which we could discuss obviously, I think generative AI and other transformer models and their social impacts are absolutely worthy of particular attention for all the reasons we're alluding to, because they've caught so much public attention. When I wrote the book, I wrote it partly as a warning about an invisible technology that was pervading administrative structures of the state and administrative structures in general, and was acting as a kind of fake ordering force of society that was unaddressed. And then obviously it went to a situation where my mum was calling me on a weekly basis telling me about AI being on the news again and what did I think of all that? It's open to discussion.

But I still think that the broad thrust of arguments put forward in the book about AI in general actually holds true in general terms for generative AI, which has its own particular sort of knock-on effects. And that's come together recently with the sort of fusion of Silicon Valley and far-right politics being very resonant with exactly what I was talking about in the book.

And yeah, it was a surprise to me what happened with generative AI. I didn't expect the transformer models or large language models to become that good at simulating the things they were trying to simulate so quickly. And I didn't anticipate people's enthusiasm for it. But the wider social and political impacts, I think, were things I already had a handle on, I would say.

Justin Hendrix:

Just for anybody who literally opens the first page of this book, they will find the statement, quote, "This book is about how and why we should resist the introduction of artificial intelligence," unquote. I think some people might put the book down there, they might say, "Oh, Dan's too extreme. Aren't we folding proteins? Aren't we hastening the production of some forms of information? Aren't we variously creating innovations that serve the species?" Typically, the argument you hear from centrist or even liberal or left politicians, I mean we heard this quite a lot at the Paris AI Action Summit from world leaders. There's just so much benefit and yet also risk and we've got to learn how to take advantage of the former without getting caught out by the latter. So why do you start with this extreme statement? Why hit the reader in the face with it?

Dan McQuillan:

At the time as well, I was operating in a situation where orders of magnitude fewer people were interested in AI. So in fact, I was trying to get people interested in it and also head it off at the same time.

I find the AI apparatus, which is what I would call the wider accretion of things around a particular technology... So you've got AI, it's a real thing, it does real things. What those things are worth is another matter, but it does real things. It's not an empty shell. And then you've got the kinds of epistemologies and politics that assemble around that, and the institutional forces. I call all of that stuff an apparatus. And it is for me worthy of so much attention exactly because there are so many resonances between the particular things happening in that apparatus and broader social and political dynamics, which I'm very interested in.

And one of those things that you just brought to mind for me is centrism. It's a really interesting time to be saying, isn't the center the right path here, shouldn't we have a balance, a bit of both sides. That question is being asked about what I wrote about AI at the same time that, I would say, it's very clear that centrism as a political approach is completely inappropriate to our times. Because it is, if anything, responsible for incubating the very far-right forces that it complains about and has offered absolutely no resistance to them, which after all is what I'm talking about in general.

So, I am talking about resistance to AI. I'm not talking about resisting a political movement per se, I'm talking about the social and political and economic and environmental effects of a particular technical direction, which is AI. And I'm particular about that: when I say AI, I mean things built on neural networks, deep learning and above, which would include transformer models. It would include the kinds of reinforcement learning that did the protein folding, which isn't generative AI at all, by the way. And my critique would apply to all of that stuff because I think it has certain particular characteristics. And in the book what I try to do is build up from the basics, so I try to explain what I think AI actually is. This stuff, neural networks and those technologies. Why the way they work has certain immediate consequences, such as introducing what I think of as basically a fundamental opacity to their inner workings and a real inability to say exactly why it predicted that thing, why it picked that person out as particularly risky.

And that's something obviously anybody who's familiar with terms like due process or any of those kind of pragmatic juridical democratic sort of ideas would immediately go, okay, that might be a problem. When I talk about those kinds of things that immediately flow from the technology that's directly present, including the collateral damage that comes from the unreliability of these models in the first place. Then I try to look at, okay, what kind of effects are those going to have if they're introduced into a social context and particularly an administrative context because that's what I was especially looking at.

So I make the argument that actually what AI is doing is very similar to in a way what bureaucracy does. It distances people, it creates a vector of thoughtlessness and so on and so forth. So I look very much at the relationship between AI and structural violence, which is the way people are prevented from getting what they need because of particular administrative and structural measures.

So it's trying to work out the specific technology, look at the social impacts, look at the institutional impacts, and then look at the wider, not geopolitical, but the sort of wider impacts in the context of a world that is still very much post-austerity, still very influenced by the financial crash, influenced by COVID, although it's trying its hardest to deny that ever happened. And very influenced by the climate crisis. All these things are very powerful shaping dynamics. So what happens when you take this particular kind of technology with particular affordances, let's call it, and drop it into this world, with the kind of forces we're familiar with, crises that we're familiar with? And I argue that AI is in fact what I label a necropolitical technology, one that is going to lead to people unnecessarily dying in one way or another.

And yeah, then there's another part of the book that tries to be positive about, not about AI but about paths beyond it.

Justin Hendrix:

You get into why quote-unquote fixes for some of the problems that artificial intelligence seems to introduce are likely to fail, and you talk about the general sort of solutionism that is brought up around artificial intelligence. I think that's one of the more interesting things. I think a lot these days about what it must be like to be a political leader, a leader of a state, or a policymaker on some level. You're sitting there faced with a huge number of interconnected, complicated problems. You've got to somehow be accountable, at least on some level, to public opinion. You've probably seen these systems as a quick solution. These guys over here at Microsoft and Google and OpenAI, and any number of other firms, are saying, look, we've got a magic box, and you've got problems, we can work on those problems. We'll fix them with artificial intelligence. And eventually, not only will we solve those problems, we'll solve all your problems. We'll give you the magic answer. I feel like if I were in their shoes looking at that dashboard, the aroma of that elixir would be quite enticing.

Dan McQuillan:

People are clearly drinking heavily of that elixir, and we're all going to experience the hangover. Maybe one of the resonances, there are a lot of resonances going on, one of the resonances that I could throw in there, which is something that the book also attempts to provide a reading of, is, as I say, epistemology: how is the world known through these devices? And I think that's very relevant to questions of, let's say, state action or state perception. A cousin of that part of the book might be something like Seeing Like A State, saying that there are certain ways of understanding the world that are particularly amenable to what we understand as the modern state. And I think it's worth understanding, for example, the effect of seeing the world in that way, and also the history of where that way of seeing the world comes from, in relation to something like AI.

Because I would say that the problem for somebody in a position like you described, and a disclaimer on my part, I feel very little empathy and sympathy for people in those positions, for a number of reasons. But I would say they're stuck because they are essentially already both a product and a symptom of a system that is set up not to be able to solve these problems. Because it is entirely productive of these problems and is not in any way meant to solve them. In that sense we could also think back to Stafford Beer's mantra, which I really cleave to: the purpose of a system is what it does. And in that sense, this actual political system we've got is not intended to solve any of these kinds of problems, so it's hardly surprising that people find themselves flailing around if they feel they're expected to solve these problems. But more than that, AI itself is entirely consistent: both a product of that system and an extension of it.

That's another thing about this stuff: one of the primary myths of AI, which is worth debunking, is the idea that it came from somewhere else. That either it came from deep inside the labs or deep inside Geoffrey Hinton's brain or from the future or something like that, and then comes to the world, and then we have to figure out how we're going to govern it and how we're going to make the nice bits stay and the bad bits go away. AI doesn't come from somewhere else, it's entirely a product of all of these forces. In fact, it's very directly a product of the same forces that caused the financial crash in the first place. The same people have invested massively in this particular kind of technology for these particular purposes.

Let's remember AI as we know it is an extremely narrow technology. To the extent that even people in the industry recognize it. I earwig as much as possible on industry narratives and dialogues, mainly through listening to podcasts, to what they say to each other, and there's a recognition and concern in the industry itself that putting all our eggs in the transformer model basket, which is what's happening at the moment, is a very unecological approach in the sense that it's just very narrow. This is a tiny representation of the space of all possible kinds of technology you could develop, let alone kinds of technology you could develop with this kind of resource.

But that's not an accident. You need to do a close reading of AI in a sort of humanities sense and understand its genealogy, understand where it's coming from, to understand what it's going to do. And so it comes from exactly the same place, it sees the same things, and it takes the same kind of measures as the system that produced it. So it isn't offering any alternative. So if the system itself, and when I say the system itself, I'm talking about representative politics and the neoliberal political economic system, if those things are driving themselves towards, let's say, a crisis of planetary limits, if nothing else, then AI is absolutely not going to change that direction, it's going to intensify and amplify it.

And at the same time, it does serve this purpose you mentioned. While it does that, it also provides this sci-fi narrative of solutionism. Maybe just to finish, if you look at any of the problems that AI itself has encountered, obviously the central paradigm of AI is scale, but another kind of repeated discourse in AI is that if there is a problem with AI, what we need to solve that is more AI. And in a way that's a nested version of the same thing that's happened with these states that find themselves unable to deal with the problems that they themselves are really mostly culpable of creating or at least facilitating in the case of the financial crash.

By piling on AI, they're basically saying, okay, what we need to fix this problem is more of the same kind of behavior that caused the problem. AI is fundamentally a speculative technology. It was speculative financial innovation that created the financial crash. If you look inside any deep learning model, you've got exactly the same kind of fungible speculation of data as you had in those financial models. So we've got a problem, we've created a mess. Let's pile on more of the same.

Justin Hendrix:

The first half of this book, you go through a range of different types of problems that AI introduces or intersects with. Many of those are familiar to Tech Policy Press listeners. We talk about these issues all the time on this podcast, the intersection with all forms of precarity, economic precarity, labor issues, mis- and disinformation. You talk about climate, you talk about the intersection with various other forms of injustice. But you spend a particular moment on the relationship of artificial intelligence to fascism, and that seems to me to be a word that's on a lot of people's lips at the moment. Has anything in the last couple of years changed the way you think about that relationship? Or are you simply seeing your conceptual framework studded with real-world examples?

Dan McQuillan:

Yeah, oh dear. I do sometimes say to students that... I teach a course in ethical computing, for example, and point out how many times Silicon Valley, in particular, seems to take a sci-fi movie that was intended as a dystopian allegorical warning and convert it into a business plan. Just as you said that, I had this kind of moment of terror that somebody at some point read this book a few years ago and thought, "This is a great plan. Let's really deepen our fascistic leanings by piling on more AI."

Anyway, we are all trying to get a handle on, and I guess certainly people listening to this podcast are particularly trying to get a handle on, what exactly is happening with the regime transition in the USA and its effects for the rest of the world. Its implication and entanglement with both technical means and the sort of mindset, the Silicon Valley mindset, and the dominant forces within the AI side of that... Actually, if I go back to the book and look at it, I think I did name those forces and dynamics in the book, and explicitly talked about the impact of things like neo-reactionary political ideologies and so forth.

That moment was basically an intuition or a sort of gut feeling drawing on my sensibilities towards both sides of the process. I have some felt understanding of what's going on with the technologies and I also have my own political experiences over the years, and those two things fused very strongly for me into an intuition of where this thing was going. And I guess what's happened over the last few months has really just deepened the sense that those things were actually pretty spot on, with some expansion. I think I didn't articulate so much in the book, as I've tried to do a bit more since, why, for example, the fascistic tendencies we have now, the far right and authoritarian regimes, are so much the product of the liberal rules-based order, as it likes to see itself, and how the dynamics by which it has produced its own gravediggers, in a way, are the same with the technical process. AI itself, for example, claims to solve certain deeply rooted structural social problems but actually intensifies them while acting at the same time as a diversion. It's very much a parallel with how liberal political commentators deal with the far right. They claim to want to address those problems but really add fuel to the fire, trying to use the far right as a diversion from the problems they themselves created. Anyway, something like that.

I've tried to articulate those things a bit more, and actually I was only just re-reading an article I wrote a couple of months ago which talked about the situation in the UK and the Labour Party here. The Labour government issued an AI plan in January 2025. So I wrote an article in Computer Weekly called “Labour's AI Plan, a Gift to the Far Right.” And it was a critique of all of the stuff we talked about, the environmental consequences, the solutionism and all of that, but it was also trying to say, again, why they're trying to use this solution in a very particular way. They're trying to use these solutions as an answer to the kind of social issues that literally created far-right pogroms that tried to burn hotels with asylum seekers in them. They're imagining that they can throw AI at the problem under this rubric of addressing structural issues in society while piling on nationalist rhetoric, and as hard as I could, I was really trying to bring out why this is all the same kind of mistake, basically. So that's an extremely long way to say that it now actually seems pretty much what I thought.

Justin Hendrix:

Is there anything new in DOGE and Elon Musk and what you're seeing play out? You write this critique of AI, its relationship to the administrative state, to bureaucracy, yet there still seems to be something particularly juvenile, dystopian... I don't know what the word is I'm looking for to describe it. A kind of gonzo, ridiculous nature to what we're seeing now.

Dan McQuillan:

No, I hear you, and I absolutely put my hands up. I wasn't trying to say, "Oh, I covered it all," or something. None of what I was writing about went away. But yeah, I agree with you. There is a kind of idiocracy aspect to this stuff. And I think, although I did write about infrastructure and climate change a bit in the book, particularly in respect to this idea of ecofascism, generative AI demands the infrastructure and demands the energy, the water, the land and everything else, and the data, so much more than, let's say, vanilla deep learning, that my own attention and reading of things has come much more through infrastructures, much more materialized, much more about the materiality of it, as a way to really try to understand what's happening with AI.

And the reason I mentioned that is because some of the stuff that DOGE is doing is... Okay, there are two levels to this. One is that large language models are stupid, they're just silly. They're fun as a kind of party amusement or very possibly as a sort of creative tool in some kind of niche, though I don't really buy that given their sort of fundamentally normative roots. It doesn't seem very hard to me, on the basis of empirical performance and looking at what these things actually achieve and what they actually offer, to say that the idea that these can, never mind do anybody's job, really enhance anybody's job just basically seems pretty dumb. Using them in a ridiculous way as a sort of smash-and-grab operation on government, degrading people who really have a lot of invested skill in maintaining things that have some kind of public purpose, and just running through all that in a juvenile, resentful and vengeful way seems a pretty good fit with the large language models themselves to me. They have the same character. Musk and LLMs seem to me to share a lot of DNA.

In broader terms, my reading at the moment of something like what's happening with AI, if you look at it through an infrastructure lens, would be something much more like the ideas of a guy called Ernst Jünger, who wrote most of his stuff in the 1920s and '30s, and he was a veteran of the First World War. Very much a German nationalist, but he never went Nazi, because I think he was basically a bit snobbish about the Nazi party. But he was very, very patriotic, very nationalist, very... He was a racial supremacist as well, all that stuff, but it's more his philosophy. He had some insights, and one of his insights, as a guy who was fighting in the First World War, was that the First World War had transitioned from a kind of war as people previously understood it to a large-scale industrial project. He understood that at the end of the war, the shells and the barbed wire were all components in a giant machine.

And the weird thing about Ernst Jünger is that he's almost like a sort of gonzo philosopher: "This is great." And the reason he said this is great is also because he hated the times. He interpreted his own times as being a kind of ascendancy of the bourgeoisie, which he also particularly hated. He had essentially a Nietzschean view of the world. He believed in the superior being, he felt the will to power was the main dynamic in the world. And he also adopted a kind of Nietzschean perspective of... This is where the nihilism I think is interesting, because his nihilism was similar to some of the things Nietzsche talked about as an active nihilism. So active nihilism is: let's go harder, faster. And the reason for that is because the end of everything, under one condition, if you like, is seen as the beginning moment of transformation to the new world.

Now, all this might sound irrelevant, but I don't think it is, because if you look at nihilistic accelerationism, you can see Musk's destructivism. But you can also see a lot of accelerationism that is actually far more articulate, or let's say ideologically coherent, in the forces that are behind both Silicon Valley thinking and quite a lot of far-right thinking at the moment. This is an accelerationist perspective. It's burn everything to achieve this transformation. That is a very fascist idea. Fascism is not just a regular political ideology, it's not on a spectrum of you believe in co-ops and I believe in authoritarian rule by a single party leader whose word is law. It's a different kettle of fish, it's a different category, a different order of things. It's more totalizing. It believes in the total transformation of the people, the total transformation of the planet, the total transformation through conflict as well, through violence, through destruction.

And this is, again, very resonant of Ernst Jünger's point of view, which is one of the reasons why the Nazis really loved him, although he didn't join up. And a lot of my attempt at the moment to get a handle on what is happening, not just with Musk playing with stupid tech toys and using them as a reason to humiliate as many federal workers as possible, but with the actual massive investment of billions of dollars of capital in creating the material infrastructures of this machine that is so patently not useful for anything useful, has to be understood in broader terms. So we have to try and get a handle on it as a confluence of real political forces, however irrational they are. And I think that's another key thing. These are not rational forces. They are highly irrational forces and they are manifesting in our politics in general and in our commitment to this technology.

Justin Hendrix:

If the first part of this book is about laying out the problems and laying out the perspective on the relationship between AI and politics, in the second part, you say it's about something different, something called post-machine learning, new ways of thinking potentially about computing, new ways of thinking about how to do technology. You talk about feminist science, you talk about something you call post-normal AI, you talk about something called new materialism.

Take us through a little bit of this. How do we end up with post-machine learning? What should my listeners understand about this term and what it is you're aiming at?

Dan McQuillan:

To be fair to the potential reader or listener of the podcast, I would say the ‘building things back up again’ is about a third of the book, the last third. So just to be completely honest about it. And I think what I was trying to do in the book is really to... I am taking AI seriously, I would say. I'm not dismissing it out of hand. I absolutely dispute the idea that it does any of the things that are claimed for it or that it's any kind of solution or any kind of future, but I do take it seriously in terms of its technical function, and I also take it seriously in terms of the mode of approaching the world that it instantiates. What is the broader legitimation of AI, why does AI make sense to us in any way? And I try to read that again as a continuity, as what is AI built on, in a way?

And it's built on a very positivist approach which claims that scientific... And I would say actually that science itself... I'm pro-science, right? I did a PhD in experimental particle physics, but I'm pro-science as much as I recognize the limitations of both the scientific method and the scientific worldview. And some of those limitations, unfortunately, support people who want to extend this kind of reductive positivism and the elevation of abstraction above all things into every aspect of life. And that's what I think AI's doing.

So I'm really just trying to flip the script and say, okay, to me, following this path has led to a bad place. Let's think about what it would mean to completely invert that. It's not new thinking. I don't claim anywhere in the book to have conceived of anything incredibly new, because to me that's not really the point. AI might be doing things in a particular new way, with technology that we haven't particularly seen before, but a lot of this stuff really isn't new.

And actually we have the conceptual framework to understand both what it is and why it's a problem. And the same thing with the opposite, that we could look at the way things could be done differently, understood differently, ordered differently, how relations could be different. We don't have to start from scratch, either. We have ways of understanding that. And in fact, these are things that are propagated and promoted by various collectivities and various groups of people over long periods of time. I do mention the Luddites in the book as well.

But in this case, the particular things you've mentioned, feminisms and new materialisms, feminist science, what they're really saying is that AI, knowingly or unknowingly, and in the case of most practitioners completely unknowingly, because as far as I'm concerned they have a very narrow understanding of the world or anything else apart from the specific computations that they put into practice, is promoting a certain view of the world, a highly individualizing, highly reductive understanding of each other, apart from anything else.

So I wanted to say, look, there are other ways of understanding the world. There are philosophies, and we all have a philosophy whether we like it or not, in other words, fundamental ways of understanding how life makes sense and what the world consists of. There are philosophies that actually contest this and are contesting it right now for very good reason, because they are essentially opposing the same kind of problematics that I'm complaining about with AI. The tendencies toward the cruelty and harm that I'm seeing come from AI are actually a lot of the same cruelties and harms that come, for example, from patriarchy. And feminist philosophers have spent a long time, obviously, re-articulating how life could be and why that is a strangulatory framework. And they've also applied that to the ways of understanding the world that underpin AI. I draw on the really powerful feminist and post-colonial critiques of science itself to say, at the very least, there are limits to this way of understanding the world. And if you follow it to its endpoint, you're going to end up in a very dystopian place.

And actually you can look at the world very differently. You can look at the world, and this is where the new materialisms come into it as well, as constitutionally relational. But we have a very particular way of understanding the world. We understand it as objects first and then their relations. And you can actually flip that the other way around and say, and this is something like process philosophy, you could say the relations are the foundational thing. And then the idea of particular entities, whether it's subjectivities like us or the particular things we understand as being real objects or whatever, is highly constructed. It's all very constructed, it's all highly culturally shaped, and these things [inaudible 00:34:43] come from a particular perspective on the world.

I use quite a lot of Karen Barad, partly because she was also a physicist and I find her ways of understanding the world really amenable, but also because she had a really deep handle on this stuff. I use the word apparatus to mean that kind of idea of: it's tech and it's rules and it's regulations and it's policies and it's political agreements and whatever. She also uses the word apparatus, but she means something deeper. She means something that is in itself productive of the things we understand to be the world that we live in. So where does the dividing line fall between subject and object? And that's another thing again: Donna Haraway, a really trenchant critic of our hegemonic worldview, said a lot of the really important things about life turn on what or who is an object. And my view of AI would be that it really acts as an engine of objectifying, of reducing to objects, a large number of entities that I think should be considered as beings with a validity in their own right. Mainly people, but also other living beings.

In that chapter, Post-Machine Learning, I'm trying to combine the idea of at least the possibility of alternative epistemological approaches to the world, with ones that have both practical application and real-world... things that fall out of them in the real world. So the practical application being things like post-normal science, which is simply saying that when you've got a problem that's so big, so critical and expansive that you're not going to be able to reduce it to a laboratory experiment, you're not going to be able to repeat-run it, and you can't repeat-run climate change in the laboratory and see what happens if we tweak something, these are out of the scope of the self-defined methods of the scientific approach per se. That doesn't mean we junk science, as some current regimes are trying to do. What it means is we expand it. We, for example, in that particular case, expand our definition of what the peer community is. The peer community is not just other scientists, it's people who have lived experience, it's people who have a stake, it's communities themselves. And you broaden your idea of the scientific method. Yeah, I'm trying to make it practical.

Justin Hendrix:

That is, I assume, also what you mean by anti-fascist AI. You talk about this new apparatus that isn't trying to solve quote-unquote anything but to sustain the delivery of systems of care and social reproduction under changing conditions and in ways that contribute to collective emancipation. I suppose on that, I wonder where you've got to with this anti-fascist approach. Where has your thinking gone since you wrote this book and since you've witnessed this reaction in the world to generative AI? And in a way, I suppose, people like Sam Altman and Elon Musk and others who make these promises have in many ways said all the things out loud that perhaps you feared they might be saying in private.

Dan McQuillan:

I a hundred percent agree with you. The space for this kind of argument has shifted completely from being one that needed to be brought to attention to one that none of us can escape on a daily basis. The symptoms have become very florid, so they have to be dealt with. In the book, you cited one of the very early sentences, or possibly the first sentence in the book, and I think very shortly afterwards I would've probably said something like AI is a political issue, which at the time seemed like a novel thing to say. Now everyone would be saying, "Okay," but at the time, I was trying to say not just that it is a political issue, but that it needs a political response. And when I say a political response, I really mean in terms of counter-power. I don't mean in terms of a performative response from the same political structures that have sanctioned it, which would prove to be really just a different version of the same thing. I mean forces that can apply at least a braking force to this stuff on the basis that they have their own constituent power of some kind.

An obvious example of that is something like a trade union or whatever, but I was trying to think openly about that kind of stuff, or think flexibly about that kind of stuff. And again, really just by flipping the script of what it is that brings AI about, the interests that AI serves, the kind of world it orders. My recipe, if you like, in the book was workers' and people's councils. And that was meant to be a kind of general labeling for assemblies, collectivities that assembled around particular problems or particular constituencies. The previous chapter, the one we were just talking about, also tried to reference Paulo Freire and critical pedagogy, so I was looking for modes of organization that could apply situated knowledge, as Donna Haraway put it, and critical pedagogy to addressing not just the false solutions of AI but also the problems which this false solution was claiming to solve.

And I tried to articulate them because I was also asserting, I guess, in the book, that the mechanisms that we would normally rely on, rightly or wrongly, to regulate life for the general common good were, whatever one thought about their normal operation, not going to work in this context. And I think that's been validated in spades, really. So I was trying to speak to worker organization, again, cited various examples, and community organization, and I used historical examples of things like the Lucas Plan, which was a really far-seeing foreshadowing of many good things from a group of workers who had a set of skills and said, you know what? We could apply these in a completely different way to the arms company that we're currently in, and we could really think freely about alternative technologies. They were thinking this at a time when the concept of alternative technology was coming up and the feminist movement was strong and the environmental movement was beginning, and they thought within that framework. And it's positive because actually it doesn't take that much. It just took people coming together, applying their skills and thinking differently.

It wasn't successful because of course it does take more, it takes some kind of political power to carry that through to some extent. And that's really what I was arguing for. So in the book, I'm really just arguing for these kinds of self-constituting forms of social power that bring with them a techno-political perspective, that understand that politics and technology have never really been separate. There's really no such thing as a separate politics and a separate technology. In fact, neither of those things really exists separate from the other. Really we're talking about a terrain of techno-political struggle in which, let's say, counter-hegemonic forces, or people who are trying to promote values that I would much more welcome and treasure, things like the common good and flourishing for all, have to find modes of organization that are consistent with bringing that about, and that also have some prospects of bringing that about.

And it's weird, actually, 'cause I wrote that book thinking, oh wow, I'm trying to spell it out here and I hope I'm articulating something, whether people agree with it or not. And I got a lot of criticism from people saying, okay, but give us some detail, give us a plan kind of thing. And I was really just trying to write a general pattern. So where I've got to with that stuff really is I still think that's fair enough, and whether it takes the form of trade unions or the form of disability movements or whoever the collectivities we're talking about are, I still think that's fair enough. I mean, we need collectivities with a techno-political perspective. And I think I include in the book, if I remember rightly, the ideas of convivial technology from Ivan Illich, which I have again tried to develop around these ideas.

And actually, it's pretty common for me to find that if I've got a kind of itch about, well, I need to fill that gap for myself for this picture to feel complete, I find myself going back to a lot of thinkers from the 1970s. I think that was a really interesting time when people were asking fundamental questions about what kinds of technologies should exist and who should get a say in that. Questions that were completely crushed, like many other things, by neoliberalism. Those questions have been erased and now come back to us as if it's an incredibly strange and surreal idea that we should have social sanction over the technologies that are unleashed on society. Actually, of course, the technology has done a lot of work in opening us back up to those questions. People who look back even just on social media and its social impacts are asking that question, and now they're asking those questions on the back of AI, which is good.

And so where I've got to really is a bit influenced by this idea of engaging more with the materialities of it, engaging with the shocking global enclosures of AI's infrastructures, and I'm promoting a thing that I call decomputing.

Justin Hendrix:

I spent the last couple of weeks looking at a lot of satellite images of data centers and some of the enormous investments that are being made, of course, around the world. There are patterns in terms of how these facilities are being built, where they're being built, types of deals. Companies are striking with governments for resources, for water, for electricity, for land, for labor. I feel like I want to say all of that just as a prelude to asking what does decomputing encompass?

Dan McQuillan:

It's funny you should put things in the way you did. I remember, because I was just rereading the piece I wrote about Labour's AI plans, that in a way that plan was the same, because it included phrases, I recall, about setting up AI growth zones in order to give the industry access to land and power. You can't say it much more openly than that. These are special zones that are basically states of exception, which is another thing I talk about in the book, but this is an exception for AI. Abandoning planning law, abandoning communities' ability to object to these things, and allowing the seizure of land and power.

So decomputing is trying to do a kind of not hybrid thing but fuse these questions at the same time, the environmental and the social ones. To look at the underlying dynamics, which I would for example read the AI industry's own self-articulated obsession with scale as being a corollary of, and in a way a product of, growth. The idea of a growth as an overriding principle of our economies, a sacred cow, literally. A mantra that we can't possibly contest whatever else we're talking about.

Which, of course, people have, in my opinion, correctly identified as being one of the central flaws of our current systems that have brought us to the point of planetary and social collapse. It's this idea of infinite growth, or just simply growth. And AI has its own version, which is scaling, and that's the stuff that you're seeing on the satellite photos. As well as the stuff that we can't see but can certainly read about, the trillion-dollar valuations of companies or whatever it is. Yeah, so to some extent it has a lot of parallels with, or even direct aspects of, the degrowth movement.

They're not great terms because, obviously, the problem with degrowth is the good bits of degrowth are actually pro-something rather than de-something. What they're saying is we need a restructuring. We need a restructuring of our political economies in order to actually deliver sustainable well-being. And so decomputing to some extent is applying that perspective to AI in particular, starting with AI. I think we could work outward from there, putting a flag in the ground about AI being a particular starting point for an actually progressive techno-politics, and resisting AI being that starting point. And this form of resisting AI, this idea of decomputing, has on the one hand this aspect of degrowth, which is obviously not just an environmental thing or even an economic thing but has a lot of social dimensions. It also takes quite a strong line on what I would call de-automatization.

So that's really looking at the way our existing structures and modes of reasoning and decision-making and our overall social logics are already problematic in a way that allows them to be so easily intensified by AI, so easily centralized, further abstracted, weaponized. Because of the ways we already organize our societies, they are already somewhat automatized. The decisions made, let's say in the UK, in the Department for Work and Pensions, the way decisions are made about somebody who needs care and support from the state, those decisions are made in a very automatized way, in a way that lends itself to the kind of thoughtlessness that Hannah Arendt identified as being very conducive to authoritarian thinking, authoritarian action. Much more prosaically, we can look at the research from a couple of weeks ago from Microsoft themselves that said frequent and systematic use of generative AI in organizations leads to, I forget the words they used, but it was something like basically mental degeneration of people's decision-making capacity.

It's very clear that we're not in a position to rebuild a better world, because we are so disempowered. We've lost agency, we've lost a lot of our understanding of what collective action means. Or it's still there, still very latent, but ideas of solidarity and mutual aid are very unpracticed, and these things need to be reclaimed and developed and built up before we can realistically engage in a larger-scale process of transformation. And I think AI is exactly toxic towards that. So it's saying, don't leave any kind of decision-making to AI. Do not mediate social processes or orderings or balancing by means of these technologies. But more than that, we need to develop ways of making decisions and understanding the world that develop these other individual, collective and ecological capacities.

So basically, yeah, a bit of degrowth, a bit of de-automatization, trying to provide an overall structure but also starting with the basics. Stop the data centers, that's part of it. Stop the data centers where they stand right now. It's definitely no more hyperscale AI data centers. That land, if anything, needs to be reclaimed as common land and turned into some kind of social value straight away.

The advantage of that, or what could be an advantage, is that, again, you've alluded to the idea that certain dynamics have, let's say, brought things to the fore, done us a favor of making the questions unavoidable. And I think AI infrastructure does that. Data centers are obviously and immediately harmful to the people around them, but they're also the platforms for other forms of obvious and immediate harm. They are anti-climate, if you like. They are also anti-worker. The means by which people are degraded and made precarious, which as you mentioned before is quite a topic for Tech Policy Press, these are the platforms that deliver that precarity, and they are the platforms that will increasingly marginalize people and make it difficult to continue living.

So there is a potential in the simple idea of opposing data centers, not simply in a 'not in my backyard, this big horrible building' sort of way, but in a broader sense that this is a physical instantiation of a particular politics, a particular techno-politics, which should be opposed. And out of that comes a range of positive steps, and it has a very immediate application. For me, all of this can be boiled down to, for example, wherever you are, whether you're working in a university or in a commercial setting, a factory, whatever you do, you're a community worker, whatever it is: when somebody comes along, whoever it is, and it's usually going to be your boss, and says, "I know, I've got a great idea, we can support your work 'cause we've got this great AI tool," that is a moment to say, "Okay, we've already got a problem here."

Something is rotten in the state of Denmark here. We need to first off stop that, because that is really only going to make things worse. And secondly, clearly there are unanswered questions that we really need to get together and address right now about what it is that we are doing, what it is that we're engaged in. I can speak for higher education and say that is absolutely true. What AI has made completely obvious is the state of near collapse of the idea of learning per se that has been brought about by the massification, industrialization and financialization of higher education. That is not AI's fault, but AI is like the final straw, and that's the same thing all over. So decomputing is a kind of thumbnail guide, a heuristic for saying: if they're proposing this as an answer, we really need to resist, and we really need to talk about how, in our own small space, the bit where we actually have some agency, we get more agency, and how we envision what a better world could look like. Just even in our little bit of it.

Justin Hendrix:

Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Thank you for talking to me today.

Dan McQuillan:

You're very welcome. Thanks very much for having me.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
