
Why Do People Fall For a Fake Robot Lawyer?

Keith Porcaro / Oct 14, 2024

Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Late last month, the FTC announced a settlement with the startup DoNotPay for falsely advertising an AI “robot lawyer”: one that claimed to help people “sue for assault” and generate “perfectly valid legal documents.” You might remember DoNotPay from another dubious marketing stunt: when it tried to find an attorney willing to parrot ChatGPT in a Supreme Court argument.

On the surface, this is just a bogus AI claim gone bad. But the reality is that people will keep falling for fake robot lawyers because they have no other option.

Imagine you need to find a lawyer right now. Maybe your landlord has illegally locked you out. Maybe your unemployment claim just got denied. Maybe you have a dangerous domestic situation that you need to get out of. Where would you go? There’s no 911 for legal help. And unless you’re charged with a crime, you don’t have a right to a lawyer.

If you find a lawyer who will take your case, you may not be able to afford them. And subsidized legal services organizations are able to help only a fraction of the low-income clients who come to them. In other words, most people will end their search in the same place they started: on their own.

Now imagine this: right now, as you read this, a small army of high-priced lawyers has been working for hours on some time-sensitive business concern. (The hour doesn't matter: somewhere in America, there are lawyers at work.)

This is the access to justice crisis: an office full of corporate lawyers burning the midnight oil while an ordinary person can’t get the time of day. Walk down to your local courthouse, and the vast majority of cases will have someone without a lawyer. And that’s to say nothing of the dozens of legal mazes that people run through alone just so they can solve the problems of living: claiming disability or benefits, getting health coverage, working out a custody arrangement, getting a landlord to make a needed repair.

Small wonder that trust in lawyers and courts declines year after year. Or that more and more people decide the law is not meant to protect them. Every time someone gets lost in a maze of legalese, their faith in legal institutions falters.

Ordinary Americans’ desperation creates a vacuum that AI aspires to fill. ChatGPT’s disclaimer that it should not be used for legal advice is not enough to hold back the tide of legal advice chatbots on OpenAI’s app store, the public statements from investors that AI will become everyone’s doctor, lawyer, and tutor, or the misleading marketing claims about GPT-4’s bar exam performance. Nor has it stopped a wave of companies and nonprofits offering AI-driven legal assistance tools. Not all are as brazenly branded as DoNotPay: rather than robot lawyers, they might offer ‘gray advice’: an attempt to walk the line between unregulated self-help and highly regulated legal services.

Unfortunately, what users get might be worse than no help at all.

When a fake robot lawyer makes a mistake or gives bad advice, ordinary people might not notice until it’s too late. In law, those mistakes can be serious and irreversible. Give someone the wrong voter registration information, and they could be disenfranchised. Calculate someone’s income incorrectly, and they might lose healthcare or benefits. Botch an AI translation, and an asylum seeker might have their claim rejected. Fail to include an argument in a legal pleading, and a client may lose the chance to make it later.

Beyond obvious mistakes, chatbots can set users up to fail in subtle ways. Two people might receive wildly different advice just because of innocuous differences in how they tell their stories. A chatbot might not recognize when a user has a complex or urgent issue that requires additional help. And even a chatbot that provides sound advice may leave a person with tasks they just can’t complete—from serving court papers to navigating a hearing.

And unlike human lawyers, who are legally responsible for mistakes that harm their clients, would-be robot lawyers disclaim any responsibility for mistakes, or even any warranty that their services are suitable for any purpose at all. The fine print is clear: users are on their own.

A single FTC fine is not the end of the story. As long as there are Americans desperate for legal help, someone will try to fill the void with AI.

One step regulators could take is to force supposed robot lawyers to have the same responsibility to users as lawyers do to their clients. If robot lawyers are unable to wave away mistakes with disclaimers, they may be forced to figure out how to deliver advice that actually helps people resolve their problems.

But here’s the truth: more lawyers aren’t coming. Meanwhile, more people than ever are coming to court without a lawyer. They’re met with jargon-dense forms, outdated websites, barely functional filing systems, and confusing processes. Many courts don’t even have information desks. Court staff and judges must figure out how to help more people without more resources.

This is AI’s siren song: with enough money, it will make up for chronic underinvestment in people and in a legal system that works for all.

It won’t.

The truth is, we don’t need a superintelligent robot to repair people’s trust in the law. We just need to redesign legal institutions for a low-lawyer reality. Look hard enough, and you can spot glimpses of a better future: problem-solving courts, court navigators, forms that help people assert legal defenses, and even ideas for redesigning physical courthouses.

Look hard enough, and you can imagine a just legal system that just needs fewer lawyers. Otherwise, ordinary people will continue to seek out robot lawyers, quality be damned. They’ll continue to believe that the law is not meant for them. And they’ll be right.

Authors

Keith Porcaro
Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.
