How to Regulate Deepfake Financial Fraud
Justin Hendrix / Mar 13, 2026

Audio of this conversation is available via your favorite podcast service.
This week, Meta announced the results of its second joint enforcement operation with the Royal Thai Police, the FBI, and the DOJ Scam Center Strike Force, along with law enforcement agencies from the UK, Canada, Australia, Singapore, Japan, Korea, and several other nations. The operation targeted criminal scam networks operating out of Southeast Asia, in countries like Cambodia, Myanmar, and Laos, running what amount to full-scale industrial business operations.
Based on intelligence shared by law enforcement partners, Meta disabled over 150,000 accounts associated with those networks, and the Royal Thai Police made 21 arrests.
It was, by any measure, a significant coordinated action. And yet it is hard to tell if it will have a significant impact against the scale of what's actually happening. Online fraud has become one of the fastest-growing criminal enterprises on the planet. Deepfake fraud cases are surging, and Deloitte analysts project that generative AI-driven banking fraud alone could climb to roughly $40 billion in the US by 2027.
A new report on deepfake financial fraud from Data & Society maps this threat and the potential for global regulatory reforms that could turn things around. I spoke to the authors to dig into their findings:
- Alice Marwick, director of research at Data & Society, and
- Anya Schiffrin, co-director of the tech policy and innovation concentration at Columbia University’s School of International and Public Affairs.
What follows is a lightly edited transcript of the discussion.

Fake US hundred dollar bills and debris are pictured on the floor of an abandoned scam center in O'Smach town on the Thai-Cambodian border on March 12, 2026, during a press trip organized by Thailand's Ministry of Foreign Affairs and the Royal Thai Army. (Photo by Lillian SUWANRUMPHA / AFP via Getty Images)
Justin Hendrix:
I am pleased to talk to two of you today and should disclose I've known the both of you for quite some time, have worked with you both in the past. Alice, most recently on a series that we did with Data and Society on tech and power, tech and democracy. And Anya, we have had the pleasure of teaching together, working together on a few projects in the past on tech and democracy. So, really excited to have you here today and excited that you're collaborating. We're going to talk a little bit about this report you've just put out on deepfake financial fraud and global regulation. I want to ask you just to start, how did this come together? How did the two of you get joined up on this particular project?
Anya Schiffrin:
So, Justin, it's kind of a wonderful story because it starts with our class. You brought Craig Silverman to class a couple of years ago, and he talked about his reporting on financial fraud and scams and the role that gift cards at Walmart were playing. And it was before the 2024 elections. And he said he felt that these scams were really damaging because they were creating distrust and a feeling that the system was rigged. And I thought that was fascinating and kind of kept that in the back of my mind. And then I was at a conference with Audrey Tang from Taiwan, the former... I think she was the [Minister of Digital Affairs].
And she said that in Taiwan, they had found some solutions, that they had made the platforms liable for some of these deepfakes. I think they had some new law that they had to do far more verification about the advertising and they had to take things down if they were warned. And I wasn't sure if it was true, but I thought it was really interesting. And I had a Taiwanese student in my class and I asked her to look into this. And then of course we started learning more and more about the role that AI deepfakes play and how people get so confused and they fall for these scams. And I corralled a team of students together to research this.
And then I had breakfast with Janet Haven and I said, "Would you be interested?" And she said, "We'd really like to publish the report." So, she introduced me to Alice. Of course, I'd seen Alice's name all over the place on lots of documents and in book acknowledgements, but I'd never met her before. And so, Alice and I started working together on this, but we've only met in person once or twice. And obviously, the report's coming out in the next day or two. And it's been a really, really interesting journey. I'll just say one more thing. We gathered regulators and scholars together in Malaysia, and we learned a lot about the online fraud problem and scams in that part of the world.
And then Taylor Owen and some colleagues gathered people in Ottawa. So, we did regional consultations and we're actually doing a lot more now to learn how this problem manifests itself in different parts of the world.
Alice Marwick:
The reason Janet was so receptive was that we had put out a report last year with my co-author, Lana Swartz, called ScamGPT: How AI Supercharges Fraud. It was basically a lit review that synthesized a bunch of literature about how AI is changing the scam ecosystem, from a very sociotechnical perspective. And then at the end of the report, we had a couple of recommendations, but I hadn't done any research into what different countries or regions were doing to prevent scams.
And so, when Anya brought this project to me, which was already very much in progress by the time I got involved, I realized that this was really a great way for me to expand my knowledge because Anya has been all over the world talking to people from enormous countries to small groups of banking executives. And we've really been able to map out the different ways that different places are tackling what is a really significant problem. And it's just one part of the scam ecosystem. We call it deepfake financial fraud, which is when you have a video of somebody, often Elon Musk, who's touting a fake cryptocurrency scheme and people end up sending money to these scammers, but it's not actually Elon Musk and the video is generated by AI.
Anya Schiffrin:
That's right. And Alice's report for Data and Society was kind of one of the first big reports on the topic. And so, precisely we thought, well, let's figure out what other governments are doing to fix this problem and what are the different... I love a good taxonomy. How can we categorize these solutions and what could we do in the US? And I think one of the things we learned from the regional consultations is that, well, two things. One is that each part of the world has a slightly different twist. So, in, for example, Southeast Asia, the whole problem of human trafficking and people being trafficked to scam compounds in places like Cambodia and Myanmar is a huge problem.
So, for the Indonesian government, it's partly a labor and a huge trafficking problem. And then the other thing that we really learned is the whole sort of scam supply chain. So, just as Alice is saying, there's people making the scams, there's people delivering the scams. Obviously we'll talk, I'm sure, more about Meta and how much money they make from the advertising. There's a point where the money gets taken from you and often the banks are involved in that. Or even earlier, the telecom companies, then there's the platforms. And then there's what happens to the money afterwards? Does it get laundered? Does it get put into casinos? Does it get put into real estate?
So, there's this whole sort of pipeline of scams. And the more we talk to people, the more we realize that there's different things you can do at different points in the scam pipeline as well.
Justin Hendrix:
I don't think my listeners need to be told that this is a growing problem. I mean, we see it all around us. There are headlines every day. I have a sense that we're really at the very beginning of this as a global threat, and particularly in this era of agents and agentic AI, we'll see lots of creativity over the next few months in terms of how people can build on what's already there, find extraordinary amounts of detailed, fine-grained information on people on which to conduct their scams. But I want to get a little into this scam ecosystem just a little more specifically. Anya, you've already started to draw the picture.
Let's just talk about each part. Let's start maybe up top with the platforms and online advertising and search. What does this piece of it look like?
Alice Marwick:
So, this is how people are aware of scams in the first place. They're often advertised to them. They look like any other advertisement on a social platform feed. So, last year, the Tech Transparency Project and then Reuters did amazing reporting on the amount of money that Meta was making on scam advertisements. And these ads are not necessarily particularly sophisticated. One of the most frequently clicked-on ads was an ad for McCormick, the spice company that makes the spices with the red cap. Probably half the people listening to this have a McCormick spice in their spice drawer. And there was a scam ad saying, "If you send us $20, we'll send you this full spice rack full of every McCormick spice."
And the amount of money people lost to this was such that McCormick had to put up a page on its website saying that this was not a real opportunity. So, it doesn't have to be particularly sophisticated advertising, but this is getting served to people on their Facebook feed in the same way that advertisements from all sorts of legitimate companies are. So, in many cases, the platforms are the ones who are connecting the victims with the scammers. In other cases, they are meeting people on social platforms. This is where you get into the romance scams, and the first few interactions that people have is through a social platform's messaging system.
Anya Schiffrin:
So, I think that we're taking a very common sense approach. When you have a huge problem, you have to think about where the most efficient solution is, right? What is the low-hanging fruit? And it seems to me very clear that the platforms really are the people that need to do something. And so, I started looking, and it turns out that in a lot of fields, in law, in operations research, there are actually theories about this. And the classic tort law paper by Guido Calabresi, the former Dean of Yale Law School, was this idea of the cheapest cost avoider. I think that was originally looking at things like car accidents, but I thought, actually, that really applies to this situation.
Facebook is an enormous choke point for this problem. They're the ones doing what Alice has written about: the distribution at scale. So, it really makes sense to target them. And I've heard you, Justin, over the years on different panels talk about product liability and how we can apply that to the problem of platforms and mis- and disinformation. And there's so many things you've said that are kind of echoed in my mind. And I started looking at this and found that, sure enough, there's a legal scholar at NYU called Catherine Sharkey, who's pointed out that the digital economy has actually necessitated a shift in how we think about liability.
And if you think about it, when there's a product that doesn't function, Amazon is in the best position to not sell that product. They're the ones with the marketplace. They're the ones that are supposedly vetting their sellers. It's really Amazon. So, I would say that having looked at this, I think a key, key point of intervention is Facebook. They're making money off these ads, they're distributing these ads, and they're actually in a position to do something about it. So, I think it's really important to think about Facebook as kind of the cheapest cost avoider in this situation and to really push them to do far more to stop this problem.
Justin Hendrix:
You don't leave the telecoms out. You also talk about telecom infrastructure. A lot of this is through AI-driven SMS and other forms of messaging. What can the telecoms do? What role do they play in this?
Anya Schiffrin:
One thing they are doing in lots of countries is requiring ID to get SIM cards. And I'm not sure if it's working, but many places, including I think Mexico, Malaysia, and other countries, do have regulations. I don't feel I know enough to say whether that's a solution, but it's certainly something that people are trying.
Alice Marwick:
Yeah, it's a really interesting problem because we know that from a lot of scam compounds, the first contact they're having with people is these spammy text messages, right? And some of them are fake jobs, right? I get text messages all the time saying, "I have this amazing job opportunity." Some of them are pig butchering scams, where there's a very innocuous sort of first contact with somebody. And then when you get into a conversation with somebody, they'll start building emotional intimacy and eventually, they'll start taking money from the victim. And some of them are things like the DMV scam, which many of us have probably gotten where you get a text message saying that you owe money to the DMV.
Now, many of these follow a really clear pattern. The DMV one, I think, is a very good example. I don't think the DMV generally collects revenue through text messages. And it seems to me that there have to be ways to do what Apple recently did with its text messaging inbox, which is filter some of these messages out so people don't even see them. And I think that's something where we could ask for more innovation on the part of a cell phone provider or a telco network provider. Because often, when you're trying to intervene at the point where people are giving money to a scammer, you're too far along in the process; what you need to do is prevent those interactions from ever taking place.
And so, that's where we're talking about the social platforms and the telcos as the place where the victim and the scammer are actually interacting. If we can decrease those interactions by even 30%, 50%, it could have an enormous impact on the amount of money people are losing worldwide to these scams.
Justin Hendrix:
If there are free expression folks listening to this, they'll be concerned about some of the solutions around having to register for SIMs or things where people might have to verify their identity, but that looks like a fruitful place to think about how there can be solutions that preserve both privacy and free expression, even as we try to figure out these telecom-related issues. With financial institutions, you talk about the fact that they play a central role, that scams ultimately rely on formal payment systems. What can the banks do? And are we anywhere in the world making progress on this particular front?
Anya Schiffrin:
Justin, your point about that sort of trade-off between privacy and free expression, I think, is something that we see with all of these online harms, and different countries and regions are obviously dealing with it in different ways. And that's why we often wind up with the sort of low-hanging fruit of know your customer and transparency. So, I think banks are doing different things in different places. I mean, one thing is clear: bankers are really, really upset about this. And I think what's interesting about working on this problem is everybody's upset about it.
It's the kind of thing that if you sit next to anybody at lunch or dinner or meet anybody, everybody knows someone who's been scammed and everybody's worried about it. So, I think one thing banks are doing a lot of is more public education. I don't know about you, but my bank is sending me emails constantly telling me to be aware of scams. And when I try to pay someone, even someone I know, they just sort of say, "Sorry, no, you can't do it," or ask for a verification code. So, banks are definitely doing a lot more public education, and they are also in many places trying to get together and share data, because a lot of times it's hard to pick up what is an abnormal transaction and what is not.
And banks, as you say, are key because often the money might go into crypto or later gets laundered. They can pick up a lot of things, but they can't pick up everything. So, sharing data is really important. And then the anti-money laundering laws are incredibly important as well. And some of the prosecutions that have happened in places like Singapore have been over that. I think Singapore now has a shared liability framework where customers are expected to practice good digital hygiene and the banks are as well. So, they're kind of making everyone responsible.
But I think where I landed, and probably Alice also, is that you can expect individuals to do some things, but really not everything, because the whole point of these scams, especially the AI deepfakes, is that they're designed to trick you, and often they catch you. It's amazing, by the way, the number of experts in deepfakes who have been scammed, including household names, people that you know, because they get you in a moment of panic or they create a sense of urgency. And then of course, I have a friend whose mother couldn't even really send an email, but somehow she managed, with the help of the scammers, to go to an ATM and buy Bitcoin and send it off to these people.
And when her kids told her, absolutely not, she didn't believe her children. So, anyway, I don't think individual responsibility is going to be enough. I think the banks are trying to do more, but there's obviously much more they can do.
Alice Marwick:
I think the difficulty is that this is a situation in which people are often voluntarily taking money out from the bank, or they're voluntarily sending it to somebody. And what Anya's talking about here are these points of friction that different financial institutions are trying to introduce. So, if you go to the grocery store sometime or Target, sometimes you'll see a sign saying, "Do not buy gift cards for other people." That's a very simple point of friction. It's just making people think twice before they do something. But like Anya said, scammers take advantage of the fact that when you're in a state of heightened emotions, it often overrides the logical part of your brain.
And so, you find yourself doing things that if you took a step back and took a deep breath, you might think were ridiculous. So, it's about trying to introduce that friction into the system and getting people to take a step back. But the other problem is crypto, right? Crypto is really behind an enormous amount of organized crime all over the place. And it's because once the money goes into crypto, or once somebody wire transfers money from one place to another, there aren't the protections that you'd get with a credit card, where you can charge back, or a bank, where the bank can say, "Oh, this was a fraudulent use of your card."
If you are voluntarily doing these things, then it adds another layer of complication. And again, I think that's why the best point of intervention is to get people before they're even transferring money to begin with, because especially once it's in crypto, it's gone. You're not going to see it again.
Justin Hendrix:
You outlined the role of organized crime, and you've already mentioned some of the tactics, the coerced labor, various other kinds of ways that criminals around the world are coordinating on this, and some of the troubles that come along with that, the kind of cross-jurisdictional issues that make it hard to prosecute. I want to flip straight into the regulatory responses. And I want to ask you first, is there a place in the world that you look to and you say, "This is the vanguard, this is the regulator or set of regulators that are doing the most, that seem to be furthest ahead in terms of contending with this issue"?
Alice Marwick:
I mean, the difficulty is, like you said, Justin, there is a privacy trade-off. And a lot of the countries that we have looked at that are doing really forward-thinking governance, like Singapore, have a very different political system than the United States, and Singapore is also a much smaller country. So, things that might work really well in another context aren't going to work in the United States. And it's interesting to see how we're also seeing innovation at the state level in the United States. So, there are other countries that have done things like give people copyright over their likeness.
So, if there is a deepfake of Justin Hendrix and it's being used to defraud people out of their money, Justin has the right to sue or to take civil action against the people who are using it, or at least get the content taken down. And we're also seeing that at the state level: there are states in the United States that are starting to implement things like that. But the other thing that we're seeing is that the UK and the EU in general have promoted this duty of care framework for social platforms, which has been written about the most in terms of child safety, but also applies to scams and frauds, where the platforms are responsible for keeping a safe environment for their users.
Now, I have very mixed feelings about that when it comes to harm to young people because I think the way that harm is defined can be very nebulous and can be used to remove ideas that people don't like. I think it's kind of paternalistic, but we're also seeing different states in the US experiment with this kind of legislation around duty of care. So, because this is such a large problem, it's one of those things that's going to require a lot of different kinds of solutions.
Anya Schiffrin:
All of those things are really important. And then I was going to say, I feel like sometimes small countries are better able to get everybody around the table. Singapore, Australia, Malaysia, Indonesia, they're all talking to each other about this and doing more sharing. And they have a Frontier Plus and they have cross-border collaborative agreements. And the same with Taiwan. I think in a smaller place, you can actually get people in the room to agree to work together. When we had our meeting in Malaysia, the police department came, the Prime Minister's office came, the Central Bank came, and civil society came.
Whereas I do think, especially in the regulatory environment that we're in in the US right now, it's just very, very hard to get federal legislation on anything passed, let alone anything related to tech. So, when I look around the world, I think sometimes small places may be able to do more. But I also feel that people are universally worried everywhere I go, but a bit pessimistic and not certain about what can actually be done, which is why, in the short run, I think really pressuring the platforms is incredibly important.
The Australian mining tycoon Andrew Forrest is actually suing Meta in California because his image was used, I think it was something like 250,000 times, and people all over Australia ended up losing money on scams, because the ads used his image to say that something was a good investment opportunity. And it's remarkable the number of people... I know a college professor in Malaysia whose image was used. People that we might not think of as persuasive, or as somebody you would take investment advice from, are getting deepfaked all over the world. It's really quite something.
Alice Marwick:
The other thing I wanted to say is that Donald Trump introduced an executive order on March 6th that's called Combating Cybercrime Fraud and Predatory Schemes Against American Citizens. And it specifically directs the Department of Homeland Security to work with the Secretary of State, the Secretary of War and the Attorney General. And they're really focused, I think, on what they call transnational criminal organizations, really sort of placing the blame on those organizations and trying to get other countries to target those organizations when they're operating within their borders, which is a little bit of a cudgel approach.
I think it's just we're going to use our military capacity or our intelligence capacity to go after these international criminal organizations. It's really noticeable to me that there is not anything in this executive order that talks about any of these intermediaries that Anya and I have discussed, like the telcos or the platforms or the banks. It's really just about the perpetrators of these scams. And these are international criminal syndicates. These people are being pursued by governments around the world. They are not easy to find. They are not easy to target.
And because they are located in different jurisdictions and because they move around from place to place, it really does require international cooperation. But I think we're seeing more of an appetite for regulation in the United States, simply because scams are a bipartisan issue because they affect everyone's constituents, right? This is not a red issue. This is not a blue issue. This is an issue where Americans are losing money that they cannot afford to lose to this vast network of shady criminals around the world. And so, I think there's a lot of buy-in for regulation. I think we have to go a little bit further than just let's go get the bad guys.
I think any kind of legislative effort has to be more sophisticated than that.
Anya Schiffrin:
When we first started looking at this, I mean, apart from Alice's report, there wasn't that much in-depth research. And I feel like in the last six months, we've seen a lot, and I think we have to give a shout-out to the journalists who've really covered this. The Financial Times has done a superb job and has written a lot, and CNN did a piece on how China last year really, really cracked down on a lot of this, because I think there were sort of kingpins going back to China and it was difficult to punish them.
Although of course, a lot of times the money was routed through Singapore. And the Organized Crime and Corruption Reporting Project, OCCRP, did a terrific series where they wrote about Georgian scammers, and actually the sort of kingpins behind them were in Israel. And the ICIJ has done a whole series on cryptocurrency. And of course, Craig Silverman has written a huge amount on this. So, I think that these cross-border networks of journalists have done a really good job of writing about what Alice is talking about, which is the sort of cross-border nature of these crimes, and also putting a human face to these stories.
Because when you see a profile of, it's so sad, some retired Swedish journalist who's duped out of her life savings, it really, really brings home how terrible all of this is. And I think that will probably help. I hope, and I think Alice is right, that it's going to help people realize that this is our family and our friends, and really nobody's benefiting except for the criminals. The people that are trafficked and have to scam people are in need of help, and so are the poor people that are getting scammed. So, I hope that this will be an area where we can also get together and help fix the problem.
Justin Hendrix:
I want to ask you just a little more detail about know your customer and this idea of maybe having more requirements in particular on the platforms. And I know folks have advocated for this in different ways over the years. I think about the Check My Ads Institute in particular that I know has proposed mandatory due diligence, know your customer standards for platforms. Do you see this anywhere in the world being considered as a serious legislative proposal, maybe even from a country or a set of countries that would address that choke point that you talked about, whether it's Meta or Google or others?
Alice Marwick:
Well, I think know your customer requirements come out of anti-money laundering and counter-terrorism financing laws, right? So, there are many jurisdictions where you see these laws applying to banks and financial institutions. It's just when you get into the social platforms that I don't think anybody has seriously been able to operationalize these regulations, beyond just thinking they might be great theoretically.
Anya Schiffrin:
Yeah, that's absolutely right. I mean, remember after 2016, when Ann Ravel, the Federal Elections Commissioner, said that we should have know your customer for the platforms for political adverts, just in the way that we did for banks and financial institutions, which have been doing it for years. And then the platforms always say that it's too difficult because the scale is too large, which, I don't know, I find kind of impossible to believe. I mean, I think it would be expensive and time-consuming, but I don't really see why they couldn't do it. If you're taking money from people for an advert, why couldn't you just figure out who they are?
Justin Hendrix:
Well, it's of course one of your recommendations that advertisers should be verified and there should be accurate and easily searchable databases of ads and advertisers, which I know Europe is pushing hard on. You also say folks should disclose and label all synthetic media. Now, this one strikes me as a tall order. So, I don't know, how do you think about this, Alice?
Alice Marwick:
I mean, it's really interesting because even when companies right now are doing voluntary watermarking, so Google, there are voluntary invisible watermarks on synthetic media that's produced by a Google tool, you can just go to a different LLM or a different image generator and it'll remove the watermark. So, none of that stuff is being taken very seriously, I think, by the tech platforms or by social platforms. The problem is that unless you can really enforce this kind of requirement, which I do think you should do, because having some of the content labeled as synthetic is better than having none of the content labeled as synthetic, we're talking about criminals, we're talking about scammers.
We're talking about people who are not going to abide by restrictions because they are not abiding by the law in the first place. So, I think one of the things that I would like to see, and I think a lot of people would like to see would be for the frontier model companies to voluntarily include watermarks on any piece of content that's generated through their system. There are interoperable standards for these things. They could easily be implemented. Now there will always be open source models where you can get around it, but what you want to do is make it harder for people to circumvent these regulations.
You want people to have to go an extra step rather than literally just using ChatGPT to make their scamming materials.
Justin Hendrix:
We're clearly at the beginning of this agents or agentic move in artificial intelligence, as anybody who's messed around with these systems, I'm sure, can attest. They're capable of some extraordinary things, particularly when it comes to using vast amounts of information and daisy-chaining different tasks together. It seems like a scammer's paradise really, an incredible tool to be able to accomplish many of these things, perhaps with less human labor. Is that the next version of this report? Is that the next thing that you'll have to focus on?
Alice Marwick:
I think we have to see where the tech goes, right? I think the fantasy of agents as it's being promoted by tech companies would absolutely work very well for scamming. Some of the most harmed people in this entire ecosystem are the trafficked people who are working in the scam compounds, right? And if you replaced all of them with bots, that would be one instance in which I would have no problem with AI taking human jobs, right? That's fewer people victimized, because in many ways, these people are at the absolute bottom of the totem pole.
But I think what I worry about is decreasing those points of friction, decreasing the chances for a human to step in, like a bank teller saying, "Hey, why are you doing this?" I heard a story where somebody said that their CVS checkout clerk told them not to buy gift cards, right? These points of human intervention. So, I do worry about that. The other thing I worry about is the fact that AI is relentless. It doesn't need to sleep, it doesn't need to eat. It can go much faster and harder than a hacker can or a criminal can.
When this becomes a 24/7 thing, when there's no human check on it, I do think it has the possibility of increasing all types of cyber crime of which scams and frauds are certainly one.
Anya Schiffrin:
I think it's going to really get to the point where it's almost impossible to wire money out of your bank account. That's what I think. People will have to go in person to get any transaction done. You think about all the things like voice printing, right? It turns out that that's not foolproof. So, I think it's going to be very, very hard to come up with technology to stop this from happening. And hopefully, people will become more educated and realize they just can't send any money overseas, I mean, virtually. But I think also the problem is people love the convenience. That's the other problem as well.
So, I'm not sure that you can get millions and millions of people to just go to the bank every single time they want to transfer money.
Alice Marwick:
Yeah. At the end of our first scam primer, we talk about the problem of spam and how that's in many ways analogous to the problem of scams. Spam threatened to make email almost unusable. And spam was solved through a combination of three forces. First, there was regulation: the CAN-SPAM Act actually put penalties in place for spamming. Second, there was better technology: Google and others integrated better spam filters into people's email inboxes. And third, there were social shifts: people became better able to identify spam when they saw it. And I think that in order to truly combat scams, we'll need all three.
We'll need the regulatory approaches, we'll need technical solutions, and we'll need social awareness.
Justin Hendrix:
And certainly, we'll need international cooperation, which is one of the main things that this report leaves you with. And I know both Anya, Alice, you are great conveners of folks around the world and great connectors. I look forward to perhaps having the opportunity to join future discussions on this topic. And I will direct my listeners to this report and encourage folks if they want to add to this discourse, certainly at Tech Policy Press, we'd love to see contributions on it. So, Alice and Anya, I appreciate it very much.
Alice Marwick:
Thanks so much for having us, Justin.
Anya Schiffrin:
Thanks a lot to both of you.