AI Companions and the Law
Justin Hendrix / Jun 15, 2025
Audio of this conversation is available via your favorite podcast service.
Concerns are growing about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users. Just this week, a substantial piece by The New York Times writer Kashmir Hill detailed a range of disturbing phenomena, in particular involving people who interacted with OpenAI’s ChatGPT. There is deep concern over the effects of these interactions on children, and a growing number of stories—and lawsuits—about when things go wrong, particularly for teens.
In this conversation, I'm joined by three legal experts who are thinking deeply about how to address questions related to chatbots, and about the need for substantially more research on human-AI interaction:
- Clare Huntington, Barbara Aronstein Black Professor of Law at Columbia Law School;
- Meetali Jain, founder and director of the Tech Justice Law Project; and
- Robert Mahari, associate director of Stanford's CodeX Center.
What follows is a lightly edited transcript of the discussion.

The Character.ai app as seen in the Apple App Store on an iPhone screen. Shutterstock
Justin Hendrix:
I'm pleased to have you all here. We're going to talk a little bit about AI, AI companions and the law. But Meetali, I want to start with you, because there is quite a lot of litigation already underway. There's been news just in the last couple of weeks about a case that you are involved in. And I think that you are well placed to know the landscape generally. Can you bring us up to date, both on the lawsuit against Character.AI that you're involved in, as well as what more generally is going on in this area?
Meetali Jain:
We filed a lawsuit against Character.AI, Google, and the individual co-founders of Character.AI back in late October '24 on behalf of Megan Garcia, who's the mother of the late 14-year-old Sewell Setzer III in Florida. And after a long engagement with various character chatbots, he took his own life. We filed that case in late October. We then started to receive other outreach from similarly situated families, and filed a second case in Texas in December on behalf of two families of a then 17-year-old and 11-year-old who'd also been engaged in extended conversations with character chatbots on Character.AI. Those families have stayed anonymous. Their children are still living thankfully, but there's a lot of security risk to them.
We went through with the Florida case, an extensive round of briefing on motions to dismiss, which is the preliminary vehicle through which the defendants tried to have the case thrown out of court. The hearing was in April, and the judge ruled on those motions to dismiss two weeks ago, and found that all but one of our claims, we had asserted about 10 or 11 claims, survived the motion to dismiss, that is that there were adequate facts if taken as true to suggest that these claims should move forward. And she also for now, has kept all the defendants in the case. And so where we are is that we are moving forward into discovery, which is the fact-finding phase of litigation, where all sides have the opportunity to request documents from the other side, as well as to seek information and to depose key individuals. And so that's the phase we're moving into. We'll have another go at the legal motions once we're armed with the facts to either substantiate or not the claims that were asserted in the lawsuit.
Justin Hendrix:
Robert, I want to come to you next and ask a little bit about how you're seeing the legal situation unfold. You had a piece last year in MIT Technology Review, along with a co-author, essentially arguing that we need to prepare for addictive intelligence: this idea that these enchanting chatbots will effectively create a variety of different types of relationships, parasocial relationships, extremely addictive relationships. What does this mean from your perspective? How are you watching what's happening with Meetali's case and other scenarios out there? Where do you think we're at?
Robert Mahari:
Yeah, it's hard, because I think on a fundamental level I have a somewhat unsatisfying perspective on all of this, which is that some sort of solution is needed, and I genuinely don't know what it is. The case that Meetali is involved in is the extreme version, right, where someone was actually harmed. But I think what's almost scarier are the scenarios which doubtlessly exist, where people aren't experiencing physical harms, aren't necessarily experiencing legally cognizable harms, but are nonetheless exposed to a technology that can harm them. And in the piece that you mentioned, what we explore is this idea of addictive AI, and "digital attachment disorder" is what we ended up calling it.
And the worry is that, for the first time I think ever, we have a form of relationship that fundamentally doesn't have any giving. In all relationships, whether with other humans or with pets, there's giving and taking. And here you can have a relationship that feels pretty genuine, but that is premised only on you receiving whatever you want, hyper-individualized and at extremely high volume, at any scale that you want. The chatbot never gets tired and never gets tired of you. And so the worry is that if you spend enough time in this kind of context, you might ultimately unlearn the ability to relate deeply to other humans, because in a way, a human relationship can't give you what this kind of AI relationship can give you.
On the one hand, we clearly need more research to understand this. And the piece that you mentioned was in part a call on interdisciplinary researchers to take this seriously as an interesting phenomenon. On the other hand, we need to think carefully about what regulatory intervention is likely to work here. Because generally, I think that we've moved away from the idea of the state intervening in human relationships; at the extremes, of course, we have that, but for day-to-day relationships, we don't really want the government to tell us what or who or how we can love, and that's a good thing, all things considered. But here it seems like the economic incentives are really set up in a way that companies are going to provide this service. And unless there is some regulatory intervention, I don't see the services going away or becoming safe by themselves.
Justin Hendrix:
Clare, it's a good place to point to your work on this, and to a paper I had the opportunity to review, which points to the opportunity to bring the body of family law to bear on these questions. You write that essentially this is a new kind of relationship; it brings profound, unrecognized change to the landscape of our intimate lives. I'd point out that in your paper you also allow for some positive aspects of that, ways in which chatbots and other artificial intelligence services may in fact be good for people. And you point to how to start to think about where the legal and regulatory response should come from. I'll note you point especially to the states, which feels even more current or urgent at the moment, given what's happening in Congress. But let me just put that to you as an opening salvo. Where do you position this with regard to family law?
Clare Huntington:
Absolutely. So I do want to start with the point that you made, that there's real potential as well for AI companions. We don't want to use too many child-related metaphors, but we do want to be careful not to throw the baby out with the bathwater, or at least to be open to what might be positive. All the research we have on this is still very nascent, because this is such a new phenomenon, but there is growing evidence that they can be useful in some contexts. So for neurodivergent children in the classroom, or for second-language learners, it can make it easier to practice. Think about all the people out there who use Duolingo; you can think about a child in the classroom who might be more willing to practice whatever the new language is with a social robot than with a peer.
And in the mental health space, I think this is something that we really want to think carefully about, because there are significant risks, but there's also potential upside. So this is one thing we haven't yet talked about, but millions of people are turning to AI companions for mental health support. Now, that can be in the way that we all do it, where you just vent to a friend. So we can think about that as mental health support. But it's also happening in a more formal way, which is that some AI companions are marketed as mental health support, and yet don't have the same constraints, the legal constraints and requirements, that we have around a therapist. If someone hangs out a shingle saying they're offering therapy services, they have to be trained and licensed, and there's none of that here. And by the way, this is not yet true with the U.S. government, but in the UK, for example, the National Health Service recommends some of these services, because there's a huge demand for mental health services and not enough supply, and it's expensive for many people. So the National Health Service is sending people to these resources.
And again, it's mixed. The empirical research shows that there are potentially some upsides for properly trained AI companions, where mental health experts put in some of the constraints, so it's a more limited set of responses that the mental health companions can give back. So there is some potential there, but of course there's just enormous risk as well. Even the trained chatbots can go off the rails. And again, staying with the UK, there was an investigation done by the BBC, where they tested these apps that the National Health Service had recommended people use. The BBC posed as a child and described a scenario where the child was being sexually abused, and the app said something along the lines of, "Wow, that sounds like a beautiful experience," or something. A clearly inappropriate response. So there are risks, even with the ones that are trained. And then there are real risks when you're going to AI companions that don't have training.
And I've already been talking for a long time. So let me quickly get to family law and I'm sure we'll have a chance to circle back as well. So both positives and negatives, and that's absolutely exactly what family law does. Family law is based on relationality, recognizing that relationships bring many benefits, they are core to human flourishing. Obviously, children need relationships to help them thrive, and as adults, we all need our relationships, and the law plays a role to try to strengthen those relationships. But then being in a relationship with another person brings real vulnerability as well. That's especially true for children, but it's also true for adults, and the law tries to address that.
Maybe we can get into the specifics in a minute, but I want to push back a little bit on Robert's characterization that the law doesn't intervene in our relationships. Yes, that's true to a certain degree: if you're talking to a friend, friendship is an enormous set of relationships that are not regulated by the law, and even when you're talking to a legal family member, the law doesn't tell you exactly what you can say to that person. But there is extensive regulation of our close relationships, again both to strengthen them, to decide which relationships the law should recognize, and then to address harm within those relationships. And so that's one of the main lessons that I want to bring in from family law: it is not so odd to think that the government should play some role in strengthening relationships and in protecting us from harmful relationships.
Justin Hendrix:
This is one of those ones where I'd love it if Robert and Meetali want to respond to anything Clare said; you can certainly jump right in. But I suppose I might just ask you one more follow-up, Clare. I accept generally the idea that it's possible we could introduce these technologies into a perfect scenario, where technology firms are being more responsible, where we have a proper regulatory or policy environment to experiment with such powerful artificial intelligence. And yet, I don't know, the reality of the situation in the U.S. right now appears to be that we're going the other way. It's all laissez-faire, it's all deregulatory, it's all tie things up in court and push the innovation first. So I don't know, how far away do you think we are from an environment where perhaps some of that responsible application is possible?
Clare Huntington:
The starting point is certainly that I don't think we can expect that the technology companies are going to put in these constraints. Now they may, when they face tort liability; that's a constraint that they all have to anticipate in the future. But in terms of direct regulation, there's not much right now; I'll talk about the states in just a moment. So the technology companies can basically do what they want, and their incentive is to make money, not necessarily to encourage human flourishing. So that's the starting point. And then the question is, "Okay, what do we do about that?" I agree we are not going to see a lot of action at the federal level, but states are starting to take notice. New York State recently passed a law addressing this, and California is considering some laws.
Like Robert, I don't have a sense of, "Oh, here's the perfect... If we just put these five things into law, we'd solve this problem." It's much more complex than that, and it's not so clear. So for example, should children under the age of 16 simply be banned from access to these? I think there's a good argument that yes, they should, not that that's going to be perfect. Think about the age limits on alcohol and cigarettes: of course minors get access to those substances, but the limits also send a real message that they shouldn't have access, and parents are on notice that these pose real risks. So if there were such an age limit that simply said you have to be 16 in order to interact with an AI companion, I think that would have a tremendous expressive impact. And certainly education, in terms of parents. I don't think that's the magic solution by any means, but those are the kinds of things we should be thinking about.
And I do think some of these are things that states are beginning to think about. What I haven't seen yet and would really like to see is some kind of regulation for companies that are marketing mental health bots. Because again, I think there are real risks. And just to bring one more lesson from family law: family law embraces this idea that if someone's going to be a professional interacting with children, there's all kinds of gatekeeping, you think about... And this is not just for professionals. Yes, okay, mental health professionals have to have education and licensing. But foster parents: we don't just let anyone take a foster child into their home, they have to have licensing, they have to have training.
The idea is that we think there should be some limits on who gets to interact with children outside of the family... Or teachers, schools: there are just so many restrictions that we have on who children should be interacting with, again because of real concerns. So it's not so far afield to say, "Okay, we've got this new thing out there. It's this AI companion. There should be some restrictions on, not just whether you can access it, but how and under what constraints."
Meetali Jain:
I think one of the things, Clare, that is very provocative about your research and bringing a family law lens into this, is that I've heard you say the problem is not so much that people or kids necessarily think these companions are human, but that they think the relationship is real. And I think that is something that I would like to broadcast loudly, because as lawmakers think about fixes here, what we've seen in the current legislative proposals, of which there are now close to a dozen, is that the lowest-hanging fruit seems to be, "Well, put disclaimers on. Put disclaimers on these platforms saying, I'm not human, this is AI." And I just don't think that achieves very much, if anything. I think we really need to get into some of the thornier issues of what it means to regulate where a person believes this relationship to be real and to be very palpable in their lives.
We've heard from families where there was an attempted suicide, thankfully that was unsuccessful, but where the child left a note saying, "You took away my phone from me, you took away the only person who loves me." And so I just think that the level of engagement with lawmakers needs to be deeper in understanding that dynamic. Because also, we're seeing a real claw back on the state's ability to regulate in this space. And so I don't want to suggest that it's one and done when lawmakers regulate, but I don't think we necessarily have multiple bites of the apple here. And so I really want to encourage lawmakers to get it right to the extent that they can the first time.
Robert Mahari:
To jump in on the age limits that Clare mentioned: there are two things I struggle with. One is that this is genuinely a general-purpose technology. There are obviously specialized AI companionship providers, but take ChatGPT, for example; it seems genuinely like OpenAI is not seeking to be in the business of providing AI companionship. But we've seen evidence that ChatGPT is often used for things like sexual role-playing, off-label usage. And local models, models that don't run on a cloud but that you can physically download onto your laptop and run locally, are becoming better and better. So you could imagine someone could literally hand you a memory drive or something with a model that you can run.
And I worry that if we try to say we'll put an age limit on AI companions, then either that is going to be limited to services that explicitly market themselves as AI companionship services, in which case you're not going to catch everything, or we're going to say, no, anything that could be used as an AI companion, but then we go down this weird slippery slope to a great American firewall. And I don't think that sounds like a good solution. And on top of that, I am very optimistic and bullish on AI in general, and so the idea that a child wouldn't be able to use AI at all until they turn 16, I think, would do them a disservice.
It would be similar to saying you can't use the internet until you're 16. And the internet undoubtedly has lots of bad stuff on it, and we find ways to manage it. And the last thing I'll say is that the way we've managed the internet at large is that we've put the responsibility on parents, and that feels like an inadequate solution here in general. This consent paradigm, as the technology gets harder and harder to understand, I feel like goes out the window. But yeah, it's really tricky, I think. Anyway, I see Clare's unmuted so...
Clare Huntington:
No, Robert, I mean I really share that practical consideration. If we could just cordon off this thing called an AI companion and say you don't get access until you turn 16, that, again, would not solve everything by any means, but it might have some practical impact. But that's not how this works, for all the reasons that you just explained. And I don't have an easy way around that either. But I do want to underscore what you just said, which is that certainly we need to educate parents more so they understand the risks. From my understanding of your client, Meetali, and I've read interviews with Megan Garcia, she didn't even know these things existed. And unfortunately, parents now, through high-profile cases like that, are getting more of a sense that these are out there, but many parents, I'm sure, don't.
But knowledge is not the solution either. So I'm thinking about work here by Danielle Citron and Ari Waldman, who have recently been really critiquing this parental control model, which is just, "Oh, parents will handle it." For all kinds of practical reasons, they can't: either they don't know what's going on, or they're so overwhelmed with trying to put food on the table and work and handle all the other things about raising children that it's just unrealistic that they're going to do that. And it's a cop-out to simply say, "We'll just let parents handle this." It sounds very American, everyone decide for themselves, but we know what that's going to look like, which is that kids are going to be using it on their own. Again, no great answers, but lots of plus-ones to the problem.
Meetali Jain:
One thing I would just also point out, and again, this is not necessarily the point of intervention, is that part of the problem with general-purpose LLMs is that even with guardrails put in place, even with fine-tuning of the data that goes in, they can be jailbroken. And they can be used for purposes, Robert, as you said from that study, that perhaps they weren't intended for, or perhaps they were, but that aren't necessarily contributing to overall societal well-being.
And so I think, to the extent that we can, we should really be asking the questions, as you said, Clare, about positive use cases: what is the objective of this LLM, this particular application? And how do we narrowly focus the LLM on training data that suits that particular objective? I don't want to go back to the very narrow chatbots of yesteryear, but I also think that these general-purpose LLMs have just gone out of control and are being used for all kinds of purposes that they may or may not have been intended for, but that certainly, arguably, are not contributing to overall well-being.
Justin Hendrix:
You said something earlier, Meetali, that we get one bite at this apple. And I feel like this conversation and many others focus on kids. And I just want to put this in, and if we don't discuss it, it's fine, but I feel like the concern for me is so much bigger than kids. I think there are so many lonely adults who don't have the same... You can say that the parental oversight model is insufficient and broken, and I think it is. But if you are a 27-year-old single recent college graduate, there's even less accountability and oversight and mentorship. And I worry so much for that population, because I could see a world where we find a fix, maybe not a perfect one, for children using these tools, but we neglect this large populace. Especially in a world where you could imagine a feedback loop, where AI contributes to unemployment and loneliness on one end, and then solves it on the back end, and you just have this feedback loop of, I don't know, companionship as a service. That seems like not the world I'd like to live in, yeah.
Clare Huntington:
But it's certainly the world Mark Zuckerberg is already out there touting. This is his solution.
Robert Mahari:
Shockingly touting.
Clare Huntington:
Yes. And I think that's absolutely right. It feeds into trends that are also fed by the internet in general and gaming and whatnot, where young men especially are much more socially isolated and don't have the real-life relationships with friends or romantic partners, and then here's the seeming ready-made solution. So I agree this is not just about children. I emphasize children because it's a place where there is more willingness, especially bipartisan willingness, I think, to regulate. So it's a place to start and begin thinking about some of the risks, but the risks run across the population.
By the way, I just wanted to go back to something you said earlier, Robert, about how these relationships feel real, but they're not real in the sense of actually mimicking what a human relationship is. And one aspect of that, because I've written about this elsewhere, is that a key part of human relationships, and this comes from Melanie Klein, who was one of the original psychoanalysts and wrote a lot about this, is having rupture and then repair. It's inevitable that we are going to hurt the people we love the most. And I don't mean this in the abusive, intimate partner violence sense; I mean even just a misattunement with someone. It's understanding that rupture and then seeking to repair it, and that's what deepens a relationship. It's actually saying the wrong thing to a friend in need and then saying, "Oh my gosh, I'm so sorry I wasn't there for you at that moment," and then working that through. And that's one of the many things that people don't get out of being in a relationship with an AI companion.
Meetali Jain:
And I think that idea of sycophancy, coupled with the anthropomorphic way in which these chatbots engage, really hasn't been part of the public consciousness. I think it became more so recently, to the extent that people followed the tech updates, when OpenAI's latest ChatGPT model started to be over-the-top sycophantic and then they rolled it back. But I hope that we can start to educate people around what this means, and that friction is something that we want and that we don't want technology to replace.
Robert Mahari:
It feels like, when we're talking about this, that someone invented the love equivalent of fast food and we're here trying to tell people to eat their vegetables. And it does feel like that in some ways. To take the other side: a relationship with a human is imperfect. You have to go through this rupture-and-repair process, and they're not always there when you need them, and you have to compromise. This is a more perfect relationship, one where you don't need to compromise, one that is exactly what you want. And you don't have to convince that person to want to be around you. You can be a genuinely miserable individual, and yet you have this perfect, witty, flirtatious, whatever-you-want type of partner available 24/7.
And this is where it feels like we need psychology research. But my sense is that this triggers all the same receptors in our brain as the real thing or close enough to be substantially a replacement. And in that world, it's just so hard, it feels so hard to me to tell someone like, "Oh, this thing is going to make you happy, but you should seek out this other thing that's so much harder and scarce, because that is what is good for you." Again, this is not a productive comment, but I feel it so genuinely.
Justin Hendrix:
I do want to pick up on that, because Meetali, you've done a lot of thinking about the mismatch sometimes between scientific knowledge and legal and policy action. Clare, you've thought about that as well. Robert, you've thought about that. How far off are we here? What is the gap between where we need to be in terms of the science on these questions and where we're at right now? I guess a sub-question there is the extent to which a lot of the research that's ongoing around these questions is inside these companies, which have access to vast amounts of user data, interaction data that outside researchers, independent researchers, would have an extraordinary amount of difficulty getting hold of. I don't know, how do you all think about that? How far off are we in terms of understanding the situation? Maybe, Meetali, I'll go to you first on that.
Meetali Jain:
I think there is quite a discrepancy. I think the law always is a blunt instrument at best, to try to engage these issues that are quite complex, and particularly new technologies. We've seen this now for years. And echoing both Clare and Robert, I can't underscore enough the need for more research that's interdisciplinary in this field. Certainly, we have a need for it as we both litigate and try to engage in legislative advocacy. But I also think that we can't wait for definitive research, because that's going to take decades. We still don't have definitive research in social media, and I think that has been a hindrance to some accountability there. And we're already onto the next set of technologies. And so I do think that there's a gap between the science and accountability efforts, but I will also say that I think legal doctrines are insufficient to deal with this moment.
We've done the best job we can in terms of applying age-old frameworks to current technologies. And I do think, to the extent that we can invoke consumer protection and product liability frameworks, they are quite durable across time and across industries, in terms of the principles. But there are a number of things that have been challenging, because they don't quite perfectly fit into the four corners of how these frameworks work. For example, when you're talking about sexual grooming online, that's not really contemplated by some of the statutes that do exist, which are geared more towards the transmission of imagery or video.
And so when you're talking about text-based sexual grooming, where is the legal framework that really gets you there? Or think about the First Amendment: is this speech, is this not? Of course in our case, the judge determined, at this stage at least, that this is not speech, because it really is probabilistic determinations spit out by an LLM. But if we get to the question of whether it is speech and whether it falls within a protected category, I think there's a real question of whether the categories that have been recognized by the Supreme Court historically are really fit for purpose for what we're dealing with now. And so I think on a number of fronts, there's a need to really update legal doctrine and to engage in the interdisciplinary research that's going to help us have the tools that we need to seek accountability.
Clare Huntington:
Yeah, I'll just jump in to say that absolutely we need a lot more research on essentially every front. Just to name one in particular, I would like to see research that compares the impact of a mental health companion, or of an AI companion serving some educational role, with the impact of a human. Because it may be that a mental health companion is better than nothing, or it may not be; it may be worse than nothing, but we don't know that. Or maybe it's going to turn out, to Robert's optimism point, that it's better, at least for some people. Certainly one of the nice things about a mental health companion is that at three in the morning you often can't contact your therapist, but you certainly can contact the mental health companion. In terms of cost and availability, there may be real upsides.
And so we need these comparisons in order to be able to weigh them. And then also really think about the equality dimensions, because one of the pieces of this that I'm very concerned about, is that economically stable or well off people will continue to have access to human therapists and human teachers. And low income families and individuals will have the arguably worse or at least seemingly worse AI version of that. Now, that's true with all kinds of things, from housing to transportation to whatever, that's not new, but it would be another form of inequality and that's something that we need to pay attention to.
Robert Mahari:
I think on a structural level, the traditional academic enterprise is misaligned with interdisciplinarity, and it doesn't really reward it as much as I think most people think it ought to, and so it's really challenging. The kind of work you need here would bring together legal scholars, computer scientists, psychologists, sociologists, and there are so many studies that need to be done, and they need to be done in a rigorous, thought-out fashion, because it matters. What you don't want is computer scientists dabbling in psychology research. That's not going to go well.
You also don't want people to come up with policy without really understanding the technology and the market for the technology and things like that. But I don't know, maybe Clare will disagree, but I feel like universities have done a tremendously bad job of incentivizing people to actually genuinely collaborate and reach across the siloed departments. And so it feels like this is a small example, and there are many others, but it's really unfortunate. And then you couple that with the fact that research writ large is struggling at universities. And funding I think for these creative big, high impact things might be surprisingly hard to get. And so I think the research is needed, but I'm a little bit pessimistic about our ability to do it.
Justin Hendrix:
And we know the current budget proposal cuts NSF funding, and some of the key dollars that would be available for this type of thing are gone. We also know that that same budget would call for a moratorium, essentially, on state legislation. We've mentioned state legislation. If you just put the term chatbot into LegiScan, you'll come up with, as Meetali says, a bunch of different bills that folks are considering out there in Utah and California and North Carolina and New York and Hawaii and Massachusetts and Maine. There are a ton. Some are study-type bills calling for more work on these questions. Some are trying to amend consumer fraud or deceptive business practices acts in order to address what goes on when consumers interact with chatbots. To anybody out there who's looked at any of this legislation: are there bills that you think are more promising than others, that maybe, I don't know, are coming at this question in the right way? Are there ones that you think are problematic? I don't know who would like to take that. Meetali, I think. I know you've looked at these closely.
Meetali Jain:
And I certainly don't want to cast aspersions on legislators at the state level, because I think they really are the ones carrying the water for us right now in this country, in terms of trying to come up with meaningful regulation. So I applaud the intention and the effort, although I will say that I think most of the bills I've looked at, and I've probably looked at about nine, are not complete frameworks or are underwhelming, in that the emphasis really is on the user interface, which is where I think it's too late. And I think if you focus at the user interface level, you're often going to face constitutional challenges, even if the regulation is eventually adopted.
I will say that one bill that I think is an interesting one, although I also think it needs improvement, and TJLP endorsed it with amendments, is the California bill by Assemblywoman Rebecca Bauer-Kahan. It is focused more on children, but it tends to almost create a mini EU AI Act for kids, in creating more of a framework that's based on risk assessment upstream from the user interface. And I think that's more of what we need. I also think that there's at least one bill, though at this point there might be several, in North Carolina that focuses more on regulating the therapist bots, as Clare said. And I think that's an interesting place to focus, because it is more discrete, it is lower-hanging fruit, in terms of the fact that several of these chatbot applications allow for that type of functionality. And that is something that I think we should be concerned about, because we do have civil laws, we have regulations, that govern the licensure and ability of humans to present themselves as these professionals. And so that seems to me to be a really important place to focus.
Robert Mahari:
Meetali mentioned the EU AI Act, which is something that I engage with quite closely. I think the European goal was to regulate first and then have other folks follow, and I think that's working out. But I feel that the Act was written a while ago, and I think it gets certain things quite wrong. And I worry because I've seen a couple of state bills that mirror the EU AI Act in substantial respects. One example of what it gets wrong is that it's become an extremely mechanism-focused regulation. I'll give you an example: it prohibits the use of AI to categorize people or infer people's race, but in the clarification that they just published on the prohibition, they say it wouldn't be prohibited to categorize people by skin color, as long as you're not claiming to infer race.
And so that's the kind of thing where it's like, what good have you done? You've created a mechanism by which people can use a loophole. I don't think you've achieved the goal. Rather than being principled and policy-oriented, it's very mechanistic. And I think it ultimately will create huge compliance burdens. I think it will make AI innovation in Europe harder, but I'm not sure it will protect European fundamental rights. And I'm a little bit nervous that other jurisdictions view it as this very comprehensive solution, when I don't think it is.
Justin Hendrix:
And even the Europeans are second-guessing the AI Act at the moment and raising questions about whether they'll essentially enforce it in full. But Clare, how about you? Is there anything you're watching on the state level? Or might I press you to comment on the moratorium and the wisdom of it if you are willing to?
Clare Huntington:
I think what I'm most interested in is actually the work that Meetali is doing. I think liability after the fact is going to be something that companies will really pay attention to, and it puts the onus on them to figure out how to design things so that they don't incur that kind of liability. So I'm very much hoping that courts will figure out a way to apply these older doctrines to this modern context.
I also want to draw attention to the work of Catherine Sharkey, who is a law professor at NYU and a torts expert, and who has been writing about AI and about the way in which after-the-fact liability really can play a role here, precisely because of a lot of the challenges we're talking about: we don't have a lot of information before the fact, and this is still an evolving technology. But just as with the automobile, tort liability was a way that we dealt with an evolving technology in our society, and tort liability can play that role here. So I definitely direct listeners to her work as well.
I just want to... This is actually going back into the weeds, but I think it will help some listeners to make very concrete what some of the risks are, even though we've also talked about potential upsides. This is an example that really, for me, helps capture what this means from, let's say again, the child's perspective. A child can go on Character.AI and choose from many different companions. The way it works is that other people create companions and post them on Character.AI, and then someone can choose to adopt that companion. And there's one companion on there whose literal name is the "possessive boyfriend." The traits of the possessive boyfriend are that he wants to know your location, gets jealous when you spend time with other people, and so on. And if you take the list of those behaviors and put them side by side with the red flags that the National Domestic Violence Hotline puts out for abusive relationships, it's one for one.
It's those kinds of things; these are very real. And again, we can think that adults should have lots of autonomy about the kinds of relationships they're in, but we're talking about a child who is only just beginning to understand what a romantic relationship might look like. Obviously there are real risks to having a human possessive boyfriend or girlfriend or partner, but this is a commercial product being marketed out there. Again, it's not that Character.AI itself created the possessive boyfriend; it has a platform that's hosting this possessive boyfriend, which raises a different set of legal issues. But the fact that it's out there is deeply disturbing.
Meetali Jain:
Justin, I'd be happy to take the moratorium question if it's still on the table, because I have very strong views about this. Just to be clear about what this proposed moratorium is: it would basically disable states, for 10 years, from being able to adopt laws with regard to AI. Those who have supported this moratorium, I think, have often invoked our case to support it, to the extent that there's an exception for laws of general applicability. So the idea is, we're not saying that states can't use tort laws to try to seek some accountability from AI firms; it's just that you can't pass explicit AI legislation.
I don't agree with that stance. I don't like the fact that our cases are being used to support this moratorium. Because what I'll say is this, we're not going to wait for states to adopt laws that allow us to litigate on the basis of explicit AI legislation. We're going to use what we have. We're going to use the tools that we have and what we have, are tort laws, the product liability, consumer protection laws. That said, there's no question that we would support the ability of states to actually come up with explicit AI legislation that would make our job in court so much easier.
For example, having to painstakingly establish that an LLM is a product would be made simpler if there were legislation saying an LLM is a product for purposes of accountability. And so I just want to make it very clear that while we are using laws of general applicability in the states for these accountability efforts, that's not the ideal. The ideal would be to have some combination of the laws that exist in states, and have existed for a long time, with new laws that have been developed with the newer technologies in mind and that contemplate the factual circumstances that we're in.
Justin Hendrix:
I want to wrap this up just with a question to each of you. Clare already pointed us to some additional reading, others' work that we should look at. But I think this is an issue that we're going to continue to come back to on Tech Policy Press and in the broader field for many years ahead. This is a complicated issue; there's obviously a huge commercial incentive in developing these systems. There are enormous amounts of research questions, policy questions, legal questions. There'll be lots of litigation. For anyone out there in the Tech Policy Press community, people who are listening to this, what would you say are priorities for them? What are things that you hope people will press into, questions that they will go and push on? So perhaps, Clare, I'll start with you. If you think of the listeners to this podcast as your students on some level, what would you assign them to go and do from here?
Clare Huntington:
That's a great question. What I would do with my students, who are law students, is have them look at the different legal avenues. Just to name three, and we've talked about two of them so far: we've talked about ex ante regulation that comes from legislatures or administrative agencies saying technology companies can and can't do X, Y, and Z. We think about ex post liability, like the lawsuits that Meetali Jain is bringing. And then we also think about a model we haven't talked about yet, called regulation by design, which, rather than saying, "Oh, the New York State legislature knows exactly what needs to be done and they just have to get the votes to pass X, Y and Z," says, we know what our goals are. Our goals are to protect children from abusive relationships and to limit the addictiveness of some of these technologies. We could talk about what those goals might be, and then put it on the companies to design their products to further those goals.
So for example, to put this in the context of autonomous vehicles, where maybe it's a little bit easier to come up with a clear goal: the goal is that these vehicles not cause accidents. So the regulation can say, design the product to further this goal. We ourselves, the legislature or the administrative agency, don't have all the answers, but you, the technology companies developing this product, can figure it out. We as a society can decide that we want goals A, B, and C. I think that's a place where I certainly would want my students to be focusing more attention, and I think it's a place for us to be having more conversation about what those goals are, and how we might put them on the companies developing these products.
Justin Hendrix:
Robert, how about you?
Robert Mahari:
I think for anyone who is broadly in the technology and especially the technical research community, drawing on these kinds of discussions, on real-world problems, to inspire technical research is, one, a good way to write good papers. You're likely to get accepted to NeurIPS and ICML if you do work that matters. But second, it's a way to channel the tremendous intelligence that community has in a productive way. I often see these misguided attempts to do something that is broadly related to something that matters, but that is constrained and defined, without consulting the relevant folks, in a way that makes it less useful than it could be. That's for one audience.
And then for folks who are maybe more in the policymaking space, I can point towards the work we've done under the banner of the Data Provenance Initiative, where we've gone through and audited AI training data. The point is less how the study proceeded or what the results were; I think it was in part a response to tech companies saying, "We could never tell you what's actually in the AI training data. It's so large, it's so multifaceted." And so we got together 70 people in 15 different countries, and we rolled up our sleeves, and it was miserable, but we audited a substantial portion of pre-training data. And I'm really proud of this as an example that a lot of the things that might seem really hard in terms of interventions are doable, doable with the right teams and the right management and the right interdisciplinary mix of folks.
And so I think that people shouldn't be constrained too much by what seems like it would be very hard and infeasible. And I think that in many ways, AI can actually be really helpful, for example, in conducting these audits. And to Clare's point on regulation by design: I don't know if I wrote the first paper that used that term, but it was one of the early ones, so it's exciting to hear you say it. I think part of the premise of regulation by design is to say, "Let's use technology to help us achieve policy objectives, often for technology." So it's a little bit AI all the way down, but I think technology can unlock ways of regulating that were not feasible 5, 6, 10 years ago. And so that, to me, is also quite exciting.
Justin Hendrix:
Meetali, last word to you.
Meetali Jain:
I think this is a moment for creative advocacy. And I see a few different paths that I would really encourage, and do encourage, externs or students or younger attorneys to engage in. The ability and the necessity of speaking across disciplines and thinking in that manner is really key right now, as is understanding and pushing oneself out of one's own comfort zone: understanding the science from the perspective of law, understanding psychology. In this moment, the ability to understand the relationality, which is why I really enjoy Clare's interventions, is so key to understanding the technology that's at issue here, and also the business model that's been proposed and hyper-valued within Silicon Valley. I also think, from a legal perspective, that understanding some of these different legal pathways is really important.
I've been a huge fan of design-based approaches to both regulation and litigation. And one shout out I'll give, is there's a design code that's been developed by our friends at the USC Neely Center for Social Media, and they're now engaged in developing a design code for chatbots that I'm hoping will be available soon. And I would encourage people to take a look at that and really think about how that could be used in this kind of advocacy. But also thinking about all these state laws that do exist, the consumer protection, like mini UDAP statutes, the product liability, how can we use these laws if this is in fact what we're going to have for the foreseeable future? How can we use these laws in creative new ways to really address what's going on?
And then finally, I would just say that I think this is a moment for storytelling. When I got the outreach from Megan Garcia, I knew that I'd been working on social media accountability issues, largely involving children for some time. I knew that generative AI had emerged as a set of technologies, but that the forms were mostly hypothetical. These were hypothetical conversations that Kevin Roose or whoever was having and publishing in various outlets, and that it was just a matter of time before we saw an actual use case that combined this technology with this actual form.
And so when I got the call from Megan, I had these chills that said, "This is the case that we knew was coming," but we didn't know exactly how it would manifest. And I do think that it's inspired a lot of people because it's concrete. And so to the extent that we can take other stories and create the ability for people to put their hands, in a very real and tactile way, on the specific facts and specific challenges of these problems, I think that's just going to make both our legislation and our litigation more robust and nuanced, which is what I think we need.
Justin Hendrix:
I appreciate the three of you talking to me about these issues. I'm sure we'll gather in some form to talk about them again. Clare, Meetali, Robert, thank you so much.
Robert Mahari:
Thank you.
Meetali Jain:
Thanks, Justin.
Clare Huntington:
Thank you. Thanks.