Assessing Tech Platform Responses Following the Assassination of Charlie Kirk
Justin Hendrix / Sep 14, 2025

Audio of this conversation is available via your favorite podcast service.
Charlie Kirk, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was shot at an event at Utah Valley University. Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications.
It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem.
To learn more about these phenomena, I spoke to Wired senior correspondent Lauren Goode, who is covering this story.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
Lauren, thank you for joining me. Your piece is titled, “Charlie Kirk Was Shot and Killed in a Post-Content Moderation World.” Can you just set the stage for us, give us some context for what you mean by ‘post-content moderation world’?
Lauren Goode:
I would say, honestly, we have probably been in a post-content moderation world for a while now, but it's with really jarring events like this that it becomes even more apparent. What I was referring to in this case is the fact that many of us, myself included, but also folks I've heard from directly and researchers who have been tracking this, have seen the video of Charlie Kirk being fatally shot on social media platforms almost immediately upon opening some of the apps, and in many cases, without our consent. As soon as Charlie Kirk was shot, videos were shared to all of the well-known apps: TikTok, Instagram, X. The video spread very quickly. And it was really a moment that put to the test some of the content moderation policies and actions of the biggest social media platforms in the world.
Justin Hendrix:
We've seen most of the platforms make various changes to, or draw down, some of their efforts around trust and safety and content moderation over the last several months, or even years in some cases. How do you think that has affected where we are at this particular moment?
Lauren Goode:
I think at least a few things are going on that have contributed to this shift in content moderation. One is that some of these tech companies see content moderation as a cost center. When you're investing a lot of money in robust content moderation teams and tools and it's going really well, the upside isn't as obvious. But when you're spending a lot of money on it and it seems to please no one, then the incentive isn't necessarily there to keep investing in it. The second thing is that, at least for Meta and perhaps X, some of the change in approach around what's allowable on the platform is, I think, a reaction to broader politics. There have been accusations, particularly from the right, that these social media platforms have been censorious in recent years, and some of this may be a reaction to that. Basically saying, okay, fine, we'll allow it.
The third is that content moderation really is a hard problem to solve. And with the Kirk video, I would say it was impossible to have stopped the initial distribution. Almost everyone there, I'm sure, had a smartphone. That's what happens these days with horrific incidents and other public events: people capture these videos and immediately upload them to social media. But the amplification of the video, the algorithmic spread of it, the decision on the part of the platforms to autoplay thumbnails so that people see the video when they aren't expecting to, the lack of content warnings, if you've promised content warnings as part of your policy but haven't been quick enough in actually applying them, all of those things do ultimately fall back on the platforms in terms of responsibility.
Justin Hendrix:
I want to be clear that we're not just talking about the initial artifacts that people posted, but also remixes. You mentioned slow-motion versions, and various other efforts people have made to do their own form of forensic analysis of the video, conspiracy theories being spread using those artifacts. What else have you observed out there as you've done this research?
Lauren Goode:
Yeah. That's a great point. I mean, there are the sheer numbers that some of these videos racked up almost immediately. I spoke to one researcher who was tracking this overnight from Europe, the night after Kirk was assassinated, and who saw one video on TikTok reach more than 17 million views before it was ultimately taken down. One of my WIRED colleagues woke up on Thursday morning, opened Instagram, and immediately saw a thumbnail of a video of Kirk being shot. And it wasn't just the thumbnail, it was actually the video clip itself. That had more than 15 million views at the time we were reporting out our story. So, there were some videos that had just gone viral. And then there were other videos where people started to, as it works on the internet these days, add context to the videos, have reactions, introduce their own theories about what happened, using it to spark a political conversation and take it in another direction.
That seems to be the sort of thing right now that is allowable to a point on platforms, until it crosses the line into misinformation. I think that's just a part of being on the internet now and the content that people share. Certainly, we as news organizations will take footage of something, put it into the proper context, and strive to have the absolute most accurate information possible as part of that added context. But right now it's platform by platform. X will allow some graphic content, provided that it is not determined to be excessively gory, and it's unclear how X interprets that exactly. I would certainly say this video falls into excessively gory territory. So far we've seen it quite a bit on X.
TikTok draws a little bit of a harder line around what it determines to be violent or gory content. Meta, on the other hand, will allow some graphic or violent content, but insists a content warning be applied to it. And then it becomes a matter of how quickly that content warning is actually applied. All the platforms are different. They all establish their own policies and enact them in different ways. Once you get into the territory of, okay, well, it's not just the footage, but people are starting to add their own context, then I think it becomes a question of, at what point does it cross into misinformation or disinformation and have to be monitored in a different way?
Justin Hendrix:
It seems like this type of event really forces the platforms into such an unfortunate position. Their policies are never going to be perfect when it comes to managing this type of material. I'm reminded a little bit of the months immediately after the full-scale invasion of Ukraine. There was a lot of debate about what type of video was appropriate to display, what counts as people expressing themselves politically and, in some cases, advocating for their own interests, and what falls afoul of these types of policies. And I know that in the case of Meta in particular, they had to take some extraordinary measures to think through how to apply their policies in that context. I don't want to make too much of a connection between wartime Ukraine and this act of political violence in the United States. But it does seem to push us to the limits of what the platforms can reasonably be expected to handle in the onslaught that comes from users.
Lauren Goode:
Exactly. And I think the same could be said for the conflict between Israel and Palestine, and some of the imagery that people have seen coming out of Gaza and shared on social media as well. I think there are at least a few layers here. Some of the tech companies are reacting in such a way that they say, "Well, we're done apologizing. We're done trying to moderate all of this content. We're going to let things live freely on this platform, and it's not as much our responsibility anymore." A related factor is that I think, in general, there's a lot of misunderstanding around what censorship actually means from a legal standpoint, what free speech means in terms of not having the government tell you what you can or cannot say on a platform, versus these companies in the private sector that are actually allowed to make their own policies around what they do and do not allow on their platforms.
The third part of this is that it is almost impossible to stop the initial distribution. Everyone has a smartphone, is recording the event themselves, and is immediately uploading it to social media. One of the researchers I spoke to made that point. It's what happens beyond that point: what happens with the amplification of the video, what happens with the algorithmic distribution of it, and what happens once a video starts to butt up against the policies the platforms have in place. Handling that, I think, really does fall back on the platforms' responsibility.
Justin Hendrix:
I think you're right to raise the Israel-Palestine example as well. It corresponds to my thought about Ukraine: there are folks, of course, who are involved in those conflicts who would say, "Hey, this gruesome content artifact, I want as many people to see it as possible, because I want them to know what's happened here." And so, any effort to limit that, to label it, or to otherwise gate it might be regarded as effectively an act of censorship. That seems to be the kind of conundrum that the platforms face.
Lauren Goode:
It absolutely is, right. There are a lot of people who believe that in order for a society to understand the full impact of an event, you should be able to see the atrocities that are happening, that we become inured to terrible news if we don't actually see what has happened. And I certainly understand that argument. Meta in this instance has made a policy decision that the Kirk footage is allowable. The company's solution is: but we are going to apply a content warning to it, we're going to age-gate it, and we're going to continue to monitor it, so that if it evolves or something else related to it pops up that violates our policies, we'll take care of it.
That's not necessarily the wrong approach, because I think people are saying, no, we need to feel the impact of this, and that means the video's out there and we're going to see it. But if you are going to enact those policies and say we are putting up those guardrails, or we are gating it to 18-plus, or we're going to put the trigger warning on, then you have to actually follow through on that policy and, ideally, have the resources to do it.
Justin Hendrix:
Of course, this will raise all sorts of questions like those that have been raised around mass shootings and events like the Christchurch attack, and the types of ideas that informed the creation of the Global Internet Forum to Counter Terrorism (GIFCT).
I want to ask you about one detail in your piece. You point out in particular that X's AI chatbot Grok was spreading misinformation about the event. But I might just ask more broadly, I mean, what has been the effect of AI, generative AI, AI overviews, AI responses and community notes, things of that nature? Are there other phenomena you'd point to that give us a sense of how AI is playing into this information moment?
Lauren Goode:
Right in this moment, the Grok example is, I think, the most salient. I went onto X the night of the Charlie Kirk shooting and saw that it was a trending topic. And when I clicked on it, there was a summary generated by Grok that was titled ‘Charlie Kirk dodges shooting at Utah Valley University amid birthday memes,’ and said that he narrowly escaped an apparent assassination attempt during the speaking event, a suspect was arrested, and no injuries were reported. Coincidentally, the incident overlapped with this YouTuber's birthday. So, the AI basically took these two unrelated events that were happening on its platform, combined them, and generated a completely false report that presumably millions of people on X were seeing. That's to me just an incredibly dangerous situation. And also, get your platform in line. What are you doing here?
And we certainly have seen errors over this time period in the Google AI overviews that come up now in search results. That's also a feature that's available around the globe basically, so it's happening everywhere when you go to search. And the thing is that the companies making these AI products will say we're working on it, it's getting better, the hallucinations are decreasing over time. In some cases they may even share data that suggests, okay, here's the error rate, it's improving, things are getting better. But it doesn't necessarily matter that they're getting better over time if the one erroneous result you get from Grok or Google Gemini or ChatGPT or whatever is something that is seriously dangerous, that puts people in harm's way or feeds them information that could lead to harm.
And to me, this is an instance of that. I assume that a lot of people on the internet are savvy, saw that, and were aware that, you know what? He was unfortunately pronounced dead, and that summary was wrong. I mean, I think AI is going to do miraculous things, hopefully, for our society. But in this moment, that was a major fail.
Justin Hendrix:
The open source researcher Eliot Higgins from Bellingcat pointed out in a post on Bluesky that in the replies to the FBI's call for help on X, folks were attempting to do their own detective work, their own forensics, and appeared to be using generative AI to zoom in on certain aspects of the photo of the suspect. And in Eliot's post, he points out, of course, that there are subtle manipulations, hallucinations effectively, in a lot of those images. So, yet another way that AI is essentially being injected into this particular information moment. I don't know if you saw any other examples like that, remixes or other uses of artificial intelligence, that seem notable.
Lauren Goode:
One thing I did try in the aftermath of the shooting was searching on Google Gemini and ChatGPT for information about Charlie Kirk himself as a notable figure. I didn't use a search term that was specific to the shooting. And I did note that both ChatGPT and Google AI search still used the present tense in describing Kirk, so their systems hadn't yet been updated to include the information that he was now deceased. And that was interesting to me, because it really underscored that in the generative AI era we're in now, where we're using chatbots trained on pre-existing datasets, these large language models often have an endpoint. We saw this in the early days of ChatGPT being released, too, where it would say, oh, I can't give you information on that because my dataset only goes up to early 2024 or something like that.
Now they've started to integrate more web search into these chatbots, so they're pulling real-time information from the web. But ultimately, a lot of generative AI is based on pre-created, pre-trained datasets. So, it's not necessarily, at this moment, the best place to go for the most up-to-date information. And I'm certainly going to come across as biased towards the media that we make and the mediation that we do as professionally trained journalists. But when you have a system where community notes on social platforms are crowdsourced, so it's a lot of people weighing in, in a very heated moment, with information that isn't necessarily accurate.
When you have AI chatbots that may not necessarily be up-to-date or are conflating different text-based bits of information and mashing them up into one. When you have conspiracy theorists online who are adding context to videos, and the context doesn't actually make sense. Then I do think you have to step back and put yourself in a position where you say, okay, I'm going to go to a trusted news source in this moment, because that really is, I think, one of our last hopes for staying informed right now. That also gives you the necessary pause before you jump to any kind of assumption about what happened. And it's something we're certainly going to be keeping an eye on, too, now that this has entered the next phase, where there is someone in custody who is the alleged shooter.
And there has been misinformation floating around about who that person might be over the past 48 hours, too, and I think that's certainly still the case now that there is a person in custody who is being investigated as the possible shooter. Even in the days leading up to that, we saw a lot of information being spread online. And now that there's a name out there, people are racing to get online and figure out whether this person was extremely online and was referencing certain memes. I'm already seeing misinformation spreading about that. So, it's a moment to pause, I think, and not put so much faith in the platforms and products that aren't necessarily built to handle this moment.
We have yet to see what it means for us collectively as a society to have these kinds of videos, violent, politically charged or otherwise, distributed at such a mass scale. But in the absence of any kind of content moderation that seems to have the well-being of the public in mind, we also have to find ways to moderate ourselves.
Justin Hendrix:
Well, Lauren, there's going to be lots more to report on in the coming days. As you say, we have entered that new phase, and as I'm recording this on Friday, that's just beginning. The name of the suspect was just released this morning. I'm certain that by the time folks hear this, enormous amounts of material related to that individual will have washed over them across these social media networks. I look forward to your reporting on this phenomenon in the coming days and weeks. Thank you very much.
Lauren Goode:
Thanks, Justin.