How to Get Paid to Polarize on TikTok

Justin Hendrix / Feb 22, 2026

The Tech Policy Press podcast is available via your favorite podcast service.

Concerns about synthetic media and coordinated manipulation of online platforms have moved from theoretical worry to documented reality. Researchers, regulators, and civil society organizations are working to understand how algorithmically driven content recommendation systems can be exploited — not just by ideologically motivated actors, but by ordinary users pursuing financial gain.

Fundación Maldita.es is a Spanish nonprofit that has been working on information integrity and fact-checking since 2017. Its most recent investigation focuses on TikTok, and its findings raise pointed questions about the platform's creator monetization program. Researchers at Maldita documented a network of hundreds of accounts, spanning eighteen countries, that were producing AI-generated videos of protests that never happened, not out of any discernible political motive, but to accumulate followers, qualify for TikTok's revenue-sharing program, and, in some cases, sell the accounts outright.

In this episode, I’m joined by Maldita associate director for public policy Carlos Hernández-Echevarría and public policy officer Marina Sacristán.

What follows is a lightly edited transcript of the discussion.

AI-generated images of protests that never happened posted to TikTok. Source: Maldita

Carlos Hernández-Echevarría:

My name is Carlos Hernández-Echevarría, I am the associate director of Fundación Maldita.es based in Spain. I do the public policy work there, platform accountability, et cetera.

Marina Sacristán:

Hi, my name is Marina Sacristán, and I work as a public policy officer here in Maldita, and my background is in international relations.

Justin Hendrix:

Carlos, just quickly for our listeners, for anybody that's not familiar with Maldita and what it gets up to, just explain: what's your line of work? What business are you in?

Carlos Hernández-Echevarría:

Maldita is a non-profit; it is a registered foundation. We have been working since 2017. Our mission is fighting for information integrity and against disinformation. We started basically doing fact-checking, but quickly understood that we needed to do many other things, including education, engineering, and particularly public policy, in order to get the impact we were pursuing. And in that line, we are very much convinced that the role the big digital platforms play in disinformation is so crucial that we need to be there and investigate them, and see whether or not what they say and their legal obligations align with what they do and how they comply with laws.

Justin Hendrix:

So I'm excited to talk to you about this, a new report that you have out on TikTok and polarization. You state that TikTok is financing polarization in Europe and elsewhere, and we'll talk a little bit about what you mean by that. But Marina, can I just maybe ask you to explain your methodology and investigation process for this report?

Marina Sacristán:

Yes. So when we started the investigation, we really didn't know the scale it had. We were looking to write a different piece, on a different topic, about TikTok. We came across these few examples of protests supposedly taking place in Spain, but we could detect that they were AI-generated. After that we found some other TikTok accounts with similar profile names that were posting protests in Italy, Germany, and Great Britain. So then we started realizing that this was not one single campaign, but more of a modus operandi, right? There was something behind it, a reason why people were doing it. So we started trying to collect more of those to see what countries could have been impacted by this kind of content, which is AI-generated protests that didn't happen, or that were not real footage of existing protests. We started annotating all of them to see the impact and try to investigate what the reason could be for them to, I don't know, play along with this type of content.

Justin Hendrix:

Give me a little more detail on your investigative tactics. What tools do you use? How do you go about this?

Marina Sacristán:

It really depends on the investigation, but for us it's very important to have everything sorted out and organized in a specific manner, so it is easier for us to assess, for example, coordination. For this investigation, I think we came across many different campaigns, many different individuals managing different accounts. But since we tried to collect all the accounts that we detected in a very systematic way, we could see these small details in usernames, in creation dates, in name changes between the accounts, which we were able to notice only because we had collected the IDs. So all these things gave us the knowledge to assess coordination and the scale of it.

Justin Hendrix:

And as far as identifying AI-generated content, I mean, the forensics are difficult; it's often hard to tell what is AI-generated and what is not. It's always a cat and mouse game between creators and forensic tools. How do you go about making those types of assessments?

Marina Sacristán:

So this is a very specific case for this investigation, because some of the creators, let's call them, didn't really care that you knew it was AI; they were just putting out the content. So sometimes they would leave the AI watermarks from Sora or from Gemini's video generator. That was pretty obvious just from looking at the watermarks. Sometimes even TikTok had those labels marking that it was AI, but this becomes a problem later. Even if people looking at the video on the platform know it's AI, we also have people taking that content and posting it again on a different platform, and that label is missing now.

But yeah, the assessment of whether it was AI was pretty easy, let's say. Sometimes we could even see details of the watermark being blurred on purpose, so you could see in a corner that there was something moving differently where the watermark would have been. We still saw a lot of mistakes by the AI, like an arm going through the person who was speaking, or details like that. But we also used some software tools, which are not perfect, but at least they could give us some indication. And by the end of the investigation, after 5,000 videos, I think we are very well trained now to recognize even the colors of this type of content, or the movement it usually generates.

Justin Hendrix:

Talk to me a little bit about scale in terms of what you were able to uncover. You've got lots of connected accounts that appear to be coordinating in some way. What was the scale in terms of what you were able to patch together?

Marina Sacristán:

We dedicated around two months to looking for new accounts, and we ended up with 550 accounts, let's say. They were focused on 18 different countries. So the scale was pretty big, because they were producing videos at scale. We collected over 5,080 videos that all had the same characteristics. While we were doing this, we also came across different types of content, but for this investigation we wanted two basic characteristics: AI-generated content, and videos that had to do with protests. Because we also came across videos of snow in Russia in amounts that were not possible, and kids in Gaza, in Palestine, which was also civic discourse content in a sense, but not protest content. So we came across different types of content, but those are the two characteristics we were looking for. But yeah, the scale was big, although at the same time it was not an exhaustive investigation. It was not our intention to gather all the accounts that were doing this, just to publicly report an issue that the platform has.

Justin Hendrix:

Let's talk a little bit about the business model. I think phenomena similar to this have been observed across the world, across many different social networks. I'm thinking in particular about reporting by Karen Hao at MIT Technology Review years ago, looking at the way that creators were taking advantage of conflict-driven videos and political content to exploit Facebook's advertising platform. What did you observe here about how these creators are taking advantage of TikTok's creator program?

Carlos Hernández-Echevarría:

Well, the most interesting part of the investigation for me was precisely the role business models play in this, beyond the fact that there is so much political disinformation that just couldn't exist without this economic incentive from the platform. For me, having a look at the industry that has been created just because these creator reward programs don't have the safeguards they need to have, it was amazing. Marina can describe in detail how these people weren't shy about volunteering information to us, in the sense of: this is how I do it, I open several email accounts, and that way, if I use a VPN, I can use Sora without restrictions in Canada, and then I can grow an account to this point. Then, if my VPN is pointing to a UK server, I can get them monetized.

So they were very clear, first of all, that they weren't into this kind of content per se; it's just that they found it's the content that TikTok's algorithm wants and rewards, and that they were able to run the same scheme time and time again. Many of these people were probably running several accounts, some of them with an eye to selling them online, some others just to keep producing content to be monetized. It's such a clear indicator of a problem that would never occur if not for the failed policies of a platform. So for me, it's a great, great demonstration of how platform policies, when they're not applied, can not only fail to solve the problems we have been pointing at for years, but actually create new vulnerabilities and make our democracies less safe, just because they are funding the same things that they're supposed to be fighting against.

Justin Hendrix:

So we've got a group of people who are not necessarily ideologically motivated; they're financially motivated. They're sharing content across the political spectrum, don't appear to have a side per se, and are simply interested in sharing conflict material and benefiting from it. Talk to me a little bit about, you mentioned VPNs, this idea of bypassing geographic restrictions on tools and on the creator program itself. What's that about?

Marina Sacristán:

So one of the creators that we talked to, who was behind several of these accounts, around six of them, explained the whole process to us, because he was interested in us collaborating with him. What he said was that, in the first place, he usually uses a VPN only to be able to use Sora. He's located through a VPN in Canada only to be able to generate those videos, and then he creates accounts in the countries that are part of TikTok's monetization program, right? So it's only the United Kingdom, the US, Mexico, France, Germany, South Korea, and I might be missing one of them, but it's not many countries. You really need to create an account that is based in one of those countries to be able to monetize after you reach the threshold of 10K followers.

So we know that something is happening there; even if the program is only available in a few countries, they are able to access the program anyhow. And it's not only those two requirements, that the account is located in one of those countries and the user passes the 10K followers threshold. You also need to not have gone against the community guidelines, so you don't have a single violation. And that is something we think they are doing, because they are producing AI-generated content that affects public discourse, which is against the community guidelines. So if the enforcement of that specific line in the community guidelines was actually taking place, they wouldn't even be able to monetize the content afterwards.

Justin Hendrix:

And did you have any interaction with these creators?

Marina Sacristán:

Yes, because as you said, we noticed that the point of making those videos was not ideological, or otherwise they were not very successful at it, because of how much they used to change topics, going from one thing to another, and not even always political content. Sometimes they would just start posting videos of kittens, then sidekicks, and then all of a sudden protests. So we were thinking, the agenda for these people is pretty weird, and that's how we noticed that something else must be happening. After we collected many of them, we saw some that stated in their bios, "If you want to buy a US based TikTok account, DM me," or "Please follow me. My dream is to reach 10K followers," which is, as we know, the threshold that TikTok requires to join its program.

So that's when we noticed that the point was not ideological or political; it was to make money. We reached out to some of the accounts that had the account-selling announcement, and they of course told us that they were in fact selling different accounts in different locations, but also that they were already monetizing. So it is more valuable, let's say, to sell an account that has good engagement metrics, that has a lot of followers, and that on top of that is already part of the monetization program. But we also came across this one person that I already mentioned, who willingly told us the whole process, from the creation of TikTok accounts to generating AI content about the news, because that's what he said worked best for gaining a lot of views and a lot of engagement very quickly, to reach a lot of followers very quickly, right? And then he said which topics he thought were working and which ones were not, for example. So that was very interesting, and that was a very valuable testimony that we had from him.

Justin Hendrix:

Carlos, you already mentioned some of the aspects of the policy context on TikTok itself, the extent to which some of this behavior might offend TikTok's own guidelines. Talk a little more about that and then I want to ask you also of course about the policy context in the European Union under the Digital Services Act and what the implications for this type of behavior could be.

Carlos Hernández-Echevarría:

Yeah, I think it's very interesting, and it's something that happens to me particularly with TikTok, because I need to follow the platform's policies about misinformation very closely. And I find that they do have great policies that they keep updated, and that are generally strong enough to deal with the kind of challenges they have on their platform. The problem being that they are seldom enforced, whether because they don't have the capacity or the will, I don't want to characterize that, but the policies are not the problem in this case. I think TikTok has made the right call in making sure that they don't have a kind of blanket policy against illegitimate content, but draw a clear distinction, saying that content on issues that are affecting public opinion in a significant way, and coverage of current events, need to be authentic in that sense. And I think it's very important that, as these tools become more and more powerful, this is the right policy to follow.

If you have a policy against generative AI content that might look genuine and is being used in this area of public interest, then that's the right way to go. But then you need to make sure that you are able to actually spot that content, or at least respond when it gets the massive number of views that we observe here. The Digital Services Act, in that sense, provided a very strong notice and action system for illegal content under Article 16. But since this European framework was conceived, it was already clear that it needed this other part of the effort, which had to do with making sure that the platforms' terms and conditions, the internal rules, if they existed, were available and were actually enforced. On that, Article 20 of the DSA is pretty clear in allowing users to file complaints, but it also establishes an obligation for the platform to provide a remedy when the rules haven't been followed.

And I think that is very important, because if you look at the article in particular, it doesn't only cover when a platform was supposed to delete an account and did not; it also covers the other way around. So it's not only about protecting users when there is over-enforcement of the policies, but also when there is a lack of enforcement, which is what we are discussing here. In the EU regulatory framework, I understand how DSA enforcement so far has been focusing on, I don't want to say low-hanging fruit, but more crystal-clear cases around illegal content and dark patterns.

But I think the time is going to come for regulators to make sure that whatever promise these platforms make to their users is actually fulfilled. This is going to be an interesting area to follow, because I think as these platforms are forced more and more to confront the fact that they're not following their own rules, they're going to start to relax those rules and make some of them disappear, which will be an interesting side effect of this whole debate about regulation. But clearly, in this case, the policy is good; it's just that there is this massive hole in enforcement, and given the volume this kind of practice has acquired, it's hard to understand how it hasn't been spotted by the platform.

Justin Hendrix:

As you say, we just saw a set of preliminary findings against TikTok in particular around some of the allegedly addictive aspects of how the platform operates. And it is certainly, I'm sure, much more complicated to look at these sorts of systemic risk questions, things like political polarization or disinformation. These are obviously much more difficult to regulate, and there are all sorts of speech implications and other questions here. I don't know. I guess I'll ask this: the DSA also contains within it some mechanisms for providing researchers like you with better access to platform data. Would that have mattered here, if, for instance, the next step is to put in a data request to TikTok to be able to study this problem at greater scale?

Carlos Hernández-Echevarría:

I think it could mean the world. It could make not only our lives easier, Marina's and mine; it's about the information that is available. For example, we are assuming, or we are being told by these actors, that they need to reach this threshold to monetize on TikTok, when there is no solid reason for TikTok not to be able to tell researchers whether an account is monetized or not. It doesn't have to involve the specifics about payments, though that could be good. I mean, there are many ways to do it. But on the side of monetization, really on any side, but on monetization, there is so little data that platforms are opening up to researchers that this huge driver of the production and dissemination of disinformation is very much understudied, because platforms are actively blocking it.

So yeah, definitely the provisions in the DSA that have to do with data access are fundamental for all the other pieces to come into place and for studying systemic risks in any meaningful way. Because if not, you end up in a situation like we are seeing now. Right now, for example, in the US, part of the litigation against platforms regarding online harms has to do with the fact that their internal teams have been researching some of these ugly sides of their own companies, and they have probably put the findings in front of management, and management has ignored them. But we are already seeing signs from many of these platforms that the top echelon is now saying, maybe let's not investigate ourselves in this way, because that's going to create a paper trail that is going to create some problems in the future.

So either we unblock, particularly in the European Union regulatory space, the possibilities that the DSA brings for research in terms of data access, or we are going to be flying blind really quickly. And we need definitive answers on these kinds of things. How many people saw this stuff is interesting data, but whether or not there was algorithmic amplification, and whether there was monetization for real, that is probably the smoking gun in ascertaining whether or not the platform bears legal responsibility. And I think we have gotten smart at finding ways to prove that which don't rely on the platforms' data, but it would be much easier and much more straightforward if we had that kind of data from them, since they are the ones who have it.

Justin Hendrix:

Clearly you're at this business all day long, every day. Are you looking at other creator programs, other monetization programs at other platforms?

Carlos Hernández-Echevarría:

Yeah, we have been looking at not only monetization, but also, I would say, the financial aspect of it in a broader way. So not only creator programs, but also how advertising is used in this context, and how, for example, some channels are providing their own banners so you can basically PayPal them or Stripe them money. So yeah, we're looking at a number of things. We used to have good, or the best, data available on Meta. I think we're all aware of the work by What to Fix, the organization that has been tracking this issue of creator programs for quite some time. But we are in the dark about so many other platforms, and about monetization data on YouTube, for example, which is for me one of the most crucial vectors of disinformation in many areas. We are assuming monetization there, for example, because we are able to track whether or not the videos have ads, and if they have ads, you can basically safely assume that the user is engaged in some kind of revenue-sharing program with YouTube.

But as the creator programs grow more and more complicated, it's going to be difficult to have meaningful research on this unless regulators make sure this data is accessible. And I also really don't see any argument against this on privacy or commercial secrets or anything like that, because there are already decades of precedent saying that commercial communications can, and normally do, operate in a separate legal environment from users' ordinary freedom of speech. And when money is involved to promote particular views, users are entitled to a higher degree of transparency: who pays, how much, all of that. So I think this should be one of these things that are priority one for regulators everywhere.

Justin Hendrix:

I suppose my last question is have you heard any response to this report either from TikTok, from regulators, or from other researchers in the field?

Carlos Hernández-Echevarría:

Yeah, we are super encouraged by all the colleagues and researchers who have reached out, but particularly by the regulators who want to know more about this. I think something they have found very interesting in this investigation is that particular notion, something we have been suspecting for a long time: is there a link between the business model and this kind of content that is per se polarizing, that creates problems in public life? I think this is one of the clearest links that we have seen so far on this particular platform. So yeah, I am very much encouraged, and I know the timings of regulators are quite different from those of independent researchers like ourselves, but to be fully honest, I don't think we have ever had such a strong response as we have had to our last investigations, particularly on TikTok's policies on this matter.

Marina Sacristán:

I think that is also because, as you mentioned before, the articles in the DSA about systemic risks are very vague, probably on purpose, because they have to cover a lot of things, and sometimes we don't even know yet what they are going to mean. And this example specifically is very obvious: let's say it's money going from a platform to an individual who is going against the community guidelines. So I think this is why this investigation, specifically into this type of scheme, which doesn't only affect protests but also other types of content that become disinformation, is very interesting for DSA enforcement too. Making money on the side, through creator programs or ads, is something people are interested in doing, and platforms are not very interested in hearing about it.

Justin Hendrix:

Well, I appreciate you two taking the time to walk me through this research and I hope we'll have the chance to talk about additional work in the future.

Carlos Hernández-Echevarría:

Thank you very much, Justin. Thanks for providing, not only to us but to everyone, this space to discuss these things that matter. On that, I think Tech Policy Press is such a great place.

Marina Sacristán:

Thank you very much.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...
