What Kafka Can Teach Us About Privacy in the Age of AI

Justin Hendrix / Nov 3, 2024

Audio of this conversation is available via your favorite podcast service.

Today’s guest is Boston University School of Law professor Woodrow Hartzog, who, with the George Washington University Law School's Daniel Solove, is one of the authors of a recent paper that explored the novelist Franz Kafka’s worldview as a vehicle to arrive at key insights for regulating privacy in the age of AI.

The conversation explores why privacy-as-control models, which rely on individual consent and choice, fail in the digital age, especially with the advent of AI systems. Hartzog argues for a "societal structure model" of privacy protection that would impose substantive obligations on companies and set baseline protections for everyone rather than relying on individual consent. Kafka's work is a lens to examine how people often make choices against their own interests when confronted with complex technological systems, and how AI is amplifying these existing privacy and control problems.

What follows is a lightly edited transcript of the discussion.

Woodrow Hartzog:

I'm Woodrow Hartzog. I'm a professor of law at Boston University School of Law.

Justin Hendrix:

I'm excited to talk to you today. Our conversation was prompted by a paper that caught my eye as, I should admit here, an English major who got interested in tech policy. This paper you've co-written, "Kafka in the Age of AI and the Futility of Privacy as Control." Why did the two of you start with Kafka?

Woodrow Hartzog:

Thanks for having me on. It's a pleasure. I'm a big fan of Tech Policy Press, actually, I should say long time listener, first time caller. So we started this paper because years ago Dan wrote a paper where he compared the modern privacy predicament of people's personal information stuffed in digital dossiers less to Big Brother and more to Kafka's The Trial, an endless series of dead ends and frustrations and complexity, with a person trapped in this bureaucratic maze. And he and I have been talking for years about the limits of privacy as control, privacy as consent. A lot of the standard playbook of ostensibly autonomy-enhancing protocols in the law to protect people, they just weren't working. And so he had this idea to revisit this Kafka paper and combine it with the insights that we've seen about how law is developing in the world of AI.

And this was the introductory essay for a symposium held at Boston University School of Law, hosted by the Boston University Law Review, that looked at privacy at the crossroads and said, we've seen where we've been, now where are we and where are we heading? And we thought that Kafka was the perfect entry point in that direction because it showed how the individual control approach was failing and how individuals, just like in Kafka's work, were not only powerless in a system that rendered them vulnerable, but even when ostensibly given the choice ended up making decisions that were against their best interest. And Kafka's work darkly shows this side of human nature, which we thought was worth pinpointing. And we tried to think about whether there's another way out of what we would call this autonomy mess, this privacy-as-control mess, which we've ultimately ended up proposing as the societal structure model as opposed to the individual control model. And so that's what led us to this point.

Justin Hendrix:

So I want to get a little bit into those competing models, how they work, and why you prefer the societal structure model. But maybe let's just set this up for my listeners who aren't privacy scholars, and for myself; I am no privacy scholar. How would you characterize the individual control model, and how is it expressed in regulation today?

Woodrow Hartzog:

Sure. So let's go back a little and first ask what is privacy, which is a difficult question, but it's the first question that I ask my information privacy law students on the first day of class. I walk in and I say, "Let's define privacy." And by far the most popular answer I get is control over personal information. The idea behind control over information is really attractive. It is autonomy-enhancing. It is empowering, it gives people freedom to make their own choices. This is perhaps one of the most popular conceptualizations of privacy, from Alan Westin. We have a lot of these models that all focus around the general idea that if we are empowered to choose for ourselves who we expose ourselves to, then our privacy has been respected. And the way that this manifests in privacy law is through consent requirements, through transparency notions, trying to make sure that we're fully informed about the choices that we make and that we're asked before anyone makes any choices about us.

We see this in Illinois' Biometric Information Privacy Act, which requires consent before someone can use biometrics. We see this in a lot of the "I agree" buttons and cookie banners that we are faced with every single time we log onto a website. And there's a lot of wisdom to the idea of privacy as control, and it's really attractive, but it breaks down at scale. Control, unfortunately, and consent are a really broken way of thinking about privacy for at least three reasons.

First, privacy as control gives us a lot of choices, which are overwhelming. It's one thing to click the "I agree" button one time, but of course we don't end up clicking it just one time. We end up clicking it a thousand different times. And it goes from an initial "oh, look at all the options I have" to "let's just get this over with," because we have other things to do besides read privacy notices and opt-outs and we want to just live our lives. And so control, as I said in another work, ends up feeling like a DDoS attack on your brain. And so it feels overwhelming.

It's also illusory. The idea that we should have privacy as control is great, but the way that it manifests in the digital world is through user interfaces. So we get a knob or we get an opt-out button or we get a link to click on to opt out, but it's always a pre-selected set of design choices made for us. It's not as though I can call up Google and say, only track me on Wednesdays when I'm going to get a pizza and when I'm going to work or when I'm lost, and for the rest of the time don't track it. It would be wonderful if we could just call up Google and dictate however we wanted it, but that's not the way in which control is given to us. Control is given to us by, of course, a pre-selected set of options, all of which really are honestly fine with most tech companies. And so it's an illusion of choice rather than an actual choice.

And the final reason that control is broken as an approach to regulating privacy is that it's myopic. Whoever decided that control was the right way to go about it really didn't game it out at scale. And the reason I say that is that it's one thing for one person to make a decision about me, but of course my choices about what information I reveal to companies don't just affect me. My family keeps taking DNA tests and I'm like, "Please don't do that." Because of course this is the story of us, and information about me is used to train models that are then used to power facial recognition algorithms that have a disproportionate effect on marginalized communities like people of color and members of the LGBTQ+ community.

So, the idea that the collective wisdom of billions of individual self-motivated decisions is the best outcome for privacy, I think, is misguided. Because our decisions can be modulated through design, we can become acclimated to being watched. And so there are all sorts of reasons why deferring to individual choice might not get us to the best societal outcome because we basically agree to whatever we can be conditioned to tolerate, right? And over time that might not be what's best for society. We might lose the ability to meaningfully engage. And so for all these reasons, we think that the control model is really limited as an approach to regulating information privacy.

Justin Hendrix:

Okay. So then alternatively, you propose that the societal structure model is the way to go. You point to many other scholars who have helped build up this alternative view of privacy as a societal value, not just about individual interests. How does the societal structure model work and how does it differ in the types of constraints that it would put on organizations that collect data?

Woodrow Hartzog:

The societal structure model that Dan and I propose is built primarily around focusing not necessarily on empowering individuals with control and consent requirements, but rather imposing affirmative substantive obligations on those that are collecting our personal information or surveilling us to act in ways that are consistent with what we want for society, for human values. So part of that includes relational obligations. Neil Richards and I have written about duties of loyalty and Jack Balkin has proposed information fiduciary obligations, and there's a host of scholars that are now writing in the area of relational duties and relational obligations, which would be part of this societal structure model. Also, outright substantive prohibitions on certain technologies that are just too dangerous for society are probably good ways of thinking about a societal structure model. Evan Selinger and I have called for outright bans on facial recognition technology because on balance we think these tools are more dangerous than they are beneficial and society would be better off without them.

And so the societal structure model takes as its starting point not the collective wisdom of billions of individual self-motivated decisions, but human values first, and then thinking about what structures need to be in place to empower that, understanding that sometimes information is the story of us and sometimes information is relational in its nature. Sometimes there are societal values that really don't surface at the individual level, but only come up at scale.

So a really good example would be being tracked, your geolocation being tracked. It's one thing to take an individual data point and say you were at this drug store at this point in time, which under the individual control model, we say, yes, I agree for you to know where I was or know my data about this. But it's another thing entirely to retrace someone's steps everywhere they go for three months, because that paints a very different picture of someone's life and also implicates other people in ways that the individual control model just doesn't seem to capture.

There are other legal decisions that one might make. For example, a lot of the design restrictions that we see in proposed or passed bills, with respect to something like age-appropriate design code acts that specifically require duties of care, or obligations designed to limit things like engagement with certain social media. That's a substantive prohibition that doesn't necessarily defer to individual choices, but rather makes a call about things that are dangerous.

The law does this in a lot of other areas. Products liability law is a really great example. We make substantive decisions about all sorts of things that we don't like with respect to our food or with respect to other technologies or products that we interact with. And so the societal structure model, I think, would be modeled off that, more so than deferring almost completely to individual choices.

Now, there's something I want to bring up here really quickly, and that's the fact that even in the societal structure model, there needs to be room for autonomy in individual decision making. To say that the best approach is societal structure is not to completely abandon the role of individual autonomy or choice, but rather to say that people should be protected no matter what they choose. That there should be a series of baseline protections that everyone can rely upon when they're interacting in society, such that every decision isn't fraught with dangerous exposure or the gradual normalization that over time will desensitize us to any sort of surveillance and vulnerability and powerlessness. So I just wanted to emphasize that as part of it.

Justin Hendrix:

Let me ask a question about the societal model and who gets to set those societal values or those baselines. I understand that in the individual control model, to some extent maybe you're saying it's a bit more wild west for the entities that collect information; as long as they can get the consent of all the many individuals they serve, then perhaps they can just go to town. But in the societal model perhaps you have a slightly more top-down approach to deciding what's good for society. That always raises the question, who gets to decide what's good for society?

Woodrow Hartzog:

Exactly. And one of the better parts of the individual control model is that it does take as a given a certain amount of skepticism around policymakers deciding what's best for us, the people. Who's to say that what a judge thinks is best really is the best thing for society? And what we don't want is policymakers making ham-fisted and reckless decisions about what we can expose ourselves to, because that feels very paternalistic, which is a common critique of a lot of these rules.

But the societal structure model, I think, isn't just a blank check to lawmakers to decide whatever they think is right, but rather a call to re-center our rules and regulations around human values that anchor all sorts of existing frameworks that we think are pretty good. So if you adopt the relational model, what you're really adopting is a set of rules targeted at preventing abuses of power within lopsided relationships. Which of course is the reason that fiduciary law exists in the first place: sometimes people are on the bad end of a power asymmetry in a relationship, and the powerful entity has all sorts of financial incentives to engage in self-dealing. We are in those bad relationships with tech companies. There are few relationships I can think of that are as uniquely imbalanced as our relationship with Apple or Google or a lot of the most powerful technology companies in the world.

And so to adopt that is really a call to say, let's simply look for things like honesty, which is a human value that I think a lot of people can get behind, or equity and equality or free expression or consumer protection, and look at the ways in which our rules have been built around those human values. Participation, safe and reliable participation in a marketplace, for example. When you make entities trustworthy, you stimulate market activity, because people can rely on those companies more freely without worrying about the risk of exposure. And so I think that's the guide that we should be looking to more readily, not some sort of arbitrary rule, but rather existing and established frameworks.

Justin Hendrix:

So back to Kafka for just a moment. A lot of the focus of this is on the idea that Kafka's characters often make choices that are detrimental to their own lives, their own outcomes. Often when presented with a choice, even if a kind of faux choice, they make the choice that's worse for them in that scenario. How does that, I don't know, sort of symbolize the way we're behaving these days with tech firms?

Woodrow Hartzog:

I think that it's completely understandable why people make the choices that they do to expose themselves when interacting in the world, which is now of course the same thing as saying interacting with digital technologies. Part of it is because people have lives to live and these tools have been woven into almost every aspect of them. And so to try to opt out is unrealistic for people, because they have to apply for the job, or they want to check out the book, or they want to just go shopping without being tracked by facial recognition and targeted by coupons that are personalized to you, reactive to which products you spend the most time looking at. And so people make these decisions to expose themselves for completely justified reasons.

And a lot of it is not just because they have to as part of living in society, but because the benefits, even if they're modest, are readily available to us. It's easy to see how fun it is to use face filters and facial recognition features on apps, because they're fun and they're games and they seem trivial, and companies have every incentive to design them to be that way. Whereas the drawbacks, the harms for a lot of these things, are a little more remote and dispersed.

It's easy to say, I want to use this face ID to unlock my phone. You're probably not thinking, every time I unlock my phone, I slowly but surely condition myself to the fact that it's good and desirable for my face to be scanned. We don't think like that because that's a collective social harm. And so all of these sort of de minimis negative effects tend to flow underground, or sit just under the consciousness, and certainly not in a way that motivates individual decision making with respect to individualized risk and benefit models. And I think that's maybe the sanitized version. And then the really Kafka version is that there's this deep compulsion towards self-betrayal and self-destruction, that we just can't help ourselves; even though we know that this roller coaster is going to go off the rails as soon as we get on it, we get on it anyway, because of some sort of deep-seated human nature.

And Dan and I had an ongoing discussion when thinking about the article about whether Kafka showed that people were made powerless within the systems that were designed to set them up to fail, or whether there was just some deep-seated notion captured by Kafka about the human tendency towards self-betrayal. And maybe it's a little bit of both, I think is where we came out on it.

Justin Hendrix:

I must say, when I was reading that particular part of this, which for any of the Kafka enthusiasts or scholars out there that might be listening to this, makes reference to Kafka's story "In the Penal Colony" in particular, this set of metaphors for how people might be drawn to dangerous technologies or embrace technology that might harm them, I found myself thinking about all of the AI tools that these days are available to us that normally say, right there above the chat box, this thing may give you wrong answers or may hallucinate or may otherwise come up with stuff that's complete nonsense. And yet people are building these things actively into their businesses and using them for school and even more complicated and terrifying use cases, I'm sure. But just this idea that, for whatever reason, we seem to be attracted to the idea that this thing could be dangerous somehow. Or that seems to be part of the sales pitch even from Sam Altman and other folks like that: this stuff's dangerous.

Woodrow Hartzog:

Oh, absolutely. One of the things that I've written about in a different article is that the AI doomers are also the people who are creating the AI. They're the ones like, oh, this AI is going to destroy humanity, and then they make it as fast as possible. And other commentators I think have rightly pointed out, this is just another version of a hype machine. Look how amazing this thing is that I've created. It could jeopardize all of humanity, so you should give me lots of money to make it, is the implicit pitch that they make there. And I think there's something to that, which is: behold the power. So there's probably a fascination with the ability to channel that sort of power that's running underneath some of this. I think that there's also probably a little bit of an optimism bias going on here, which is, yes, it may be risky, but I'm usually pretty good with this sort of thing, so I could probably tell whether it's right or not, when thinking about whether that AI-generated search result, or summary of search results, is accurate or not.

And then there may also be an allure to offloading some of the hard things in life. There's so much about AI that promises that you don't have to have awkward situations on dating apps anymore because this thing will act as your wingman, or whatever is being proposed now, and you won't have to interact with humans because that's awkward, and you won't have to worry about the fuzziness of editing; we'll create something pre-edited for you. That sounds wonderful. And so it's probably a combination of feeling a little bit of the draw of that amazing power, combined with an allure of not wanting to do hard things, which happen to also be a lot of the things that make us who we are and make life worth living, but that's not part of the marketing pitch.

Justin Hendrix:

So you get onto why all of this is made worse by AI, and we've already started to transition to talk about AI, but why is all of this made worse by AI?

Woodrow Hartzog:

So AI is, of course, just the thing that a few years ago we were calling big data.

Justin Hendrix:

And I will point out you reference Matthew Jones and Chris Wiggins, Columbia professors who have that great book, How Data Happened; they were recent guests on this podcast as well.

Woodrow Hartzog:

Oh yeah, amazing. Yeah, it's really good. And there's a joke. When I first started teaching at Northeastern, I had a joint appointment with the School of Law and the College of Computer Sciences, and my computer science colleagues took me out one night and we had just started talking about AI, and I said, "I have an embarrassing question. What is AI?" And they said, "Here's a joke. When you're talking with the public, it's AI. When you are asking for money, you call it machine learning. And when you're talking with each other, it's just algorithms and data." And I always remember that, because AI is of course just the newest version of whatever it is that we've been talking about.

But with that preface, what it has done is dramatically lower the transaction costs for all sorts of activities. And that's the way in which I tend to think of it, in terms of what it makes easier or harder and the signals that it gives. That's really all design ends up doing at the big-picture level: the design of technologies either makes things easier or harder, or it sends a signal. And what AI has done is expose a lot of the existing cracks that we've been dealing with for a long time.

Deepfakes, while technically maybe worthy of consideration as their own technology, really just expose a lot of the existing problems around misinformation and disinformation and harassment and stalking. It's not as though deepfakes invented that stuff. They just made it significantly worse. Or facial recognition: it was possible to surveil people before, it's just a lot easier now. But the implication is that when you make something a lot easier, more people are going to do it. It's going to happen significantly more. And when things happen at scale, sometimes our consideration of the problem changes. So I've been working on a paper with Mark McKenna called Taking Scale Seriously in Technology Law, where we make the argument that we tend to think of scale as just more, that if there's more of a technology, then it's going to happen more.

But sometimes it's different. Sometimes things can be so different when applied broadly that it changes the nature of the problem. And AI is that force multiplier. It is that scale that has caused us to think differently about whether it's okay for our movements to be tracked everywhere we go. We've never really had to think about that before. Whether consent at scale is even a meaningful thing. Whether ignoring de minimis attempts at manipulation through things like dark patterns, which we may have brushed off as individual sales techniques, when they can be optimized and implemented immediately for billions of people, might cause us to rethink the nature of that problem as well.

Or even the idea of harm thresholds generally with respect to privacy law. And so all of those things make me think that largely AI is really just the straw that broke the camel's back on a lot of existing issues. But that's not to say that there aren't certain technologies that are so unique, such a difference in magnitude, that it's legally worth treating them as a difference in kind.

Justin Hendrix:

So you do say in this paper that you admire certain regulations that have come along, like the European Union's Artificial Intelligence Act. You call it a milestone in the direction of thinking through the rubric of the societal structure model. What's so great about the EU AI Act, and does it go far enough in your opinion?

Woodrow Hartzog:

So now that we've seen the final version of the EU AI Act, I think there are some things to like about it and probably some things that don't go far enough. The things to like about it, particularly at the highest tier of risk, the unacceptable risk tier, I think are really great examples of choices made on what's overall best for society, even if there are certain individuals that would like it. A really great example would be affect recognition, which I think is probably dubious even on its best days. It doesn't have a lot of proof to it, but there's incredible financial incentive for people to abuse it. And so the EU AI Act says, we're just going to say that's unacceptably risky and therefore prohibited.

Now, where we might quibble is the kinds of exemptions that are granted to that, which I understand, of course is part of any lawmaking process, but it's a great example of a willingness for lawmakers to draw a line in the sand and say, substantively this particular technology is unacceptable or unacceptably risky, and we're going to prohibit it.

The reason that's hard to do, of course, industry lobbying questions aside, is that it requires lawmakers to insert a sort of judgment; if they don't have to make it, their position is a little more defensible from all sides. It sounds good to say, "Oh, we give you the power, we give you the choice." And for a long time, it allowed lawmakers to avoid making really hard decisions about the technologies that we wanted to accept and the technologies we wanted to reject. But what we've seen is that if you adopt the individual control model, and if lawmakers continue to fail to draw lines in the sand, we're on track to accept everything eventually. And unless you buy the premise that ultimately all surveillance technologies are justified and worth adopting, then at some point lawmakers are going to have to make the hard calls. Otherwise, we'll eventually become acclimated to being watched and surveilled, and our individual control and choices will reflect that so long as they're being presented to us, because there's incredible, overwhelming incentive for companies to acclimate us to those choices over time.

Justin Hendrix:

I'm struck by the fact that even in the EU AI Act there are such huge carve-outs for law enforcement, for security. It's almost as if, I don't know, we can all agree that maybe certain use cases of facial recognition are wrong, but at the end of the day, our military might, our border security, those things, we're happy to go that route.

Woodrow Hartzog:

Yeah. I really struggle with this one too. And I was a little disappointed to see how broad the exemptions were in the EU AI Act. That being said, I have a lot of talks with people about things like facial recognition in particular, and people say, what about the positive uses of facial recognition? What about the use of facial recognition in our military? Well, Evan and I have called for outright prohibitions on facial recognition technology, but it would at least be, in my view, an improvement over the status quo if we said facial recognition is so powerful and so dangerous that it's only a military-grade technology, that it is the sort of thing that we recognize is on scale with a lot of other things that really shouldn't be in general use in society, but maybe the sort of thing that you want to develop in certain corners. That's not what I would propose, but at least it's a step towards recognizing just how dangerous these tools are and how vulnerable they make us.

Justin Hendrix:

So for a paper that spends a lot of time contemplating Kafka, I find it still to be remarkably optimistic. You write that policymakers finally appear to be losing hope that individuals are able to exercise control over these powerful and bewildering systems. You seem to think that maybe AI is almost like a clarifying moment, that, to some of your earlier comments just now, it's pushing the point to its sort of severe end and maybe more lawmakers will wake up.

Woodrow Hartzog:

So it's funny you mention that. Dan and I had a lot of back and forth as we wrote this article about how to end the paper, and we really bounced between some pretty dark, less optimistic, much more pessimistic views. And ultimately, I do think that we ended on a note of optimism for the reason that you suggest. Part of that is I'm inherently an optimistic person, and part of that is we are actually seeing a change in lawmakers' approaches to regulating technology. We're seeing it with a lot of the age-appropriate design code acts that are getting passed. We're seeing it with substantive data minimization rules that are passing at the state level. We're seeing it in the EU AI Act and the emboldening of relational protections that we might see within privacy.

I actually am optimistic about this change, and it reminds me of a paper that I wrote with my BU Law colleague Jessica Silbey a few years ago called The Upside of Deepfakes, which sounds like a bad paper, but was ultimately optimistic because we felt maybe this is the thing that can finally help us update the old playbook and understand that these are not things that can be fixed with a simple little tweak here or there, but rather require a substantive and foundational re-conceptualization of the problem, thinking about privacy in structural terms and societal benefit terms, thinking about information as associated with institutions.

It's not just about deepfakes, but it's about the integrity of our voting system. It's about having meaningful free expression rules. It's about a lot of things that are really deeper than just this one particular technology does this, collects this one particular thing and does this one particular action. And so if we can accomplish that, then I am optimistic about the direction that we might be headed in.

Justin Hendrix:

Towards the end of this paper you write, "In the end, if we reap one key insight from Kafka's work for how to regulate privacy in the age of AI, it is this, the law won't succeed in giving individuals control. Instead, the law must try to control the larger forces that exploit people and to protect individuals, communities, and society at large from harm." Let's hope some policymakers are reading their Kafka.

Woodrow Hartzog:

Let's hope. It sadly sometimes veers a little too close to reality to count as pure fiction these days, but if they can take that lesson from it, then I think I'll feel better about it.

Justin Hendrix:

If they're not reading Kafka, hopefully they're reading Woody Hartzog and Daniel Solove. So thank you so much for taking the time to walk me through this paper.

Woodrow Hartzog:

Justin, it's been a pleasure. Thanks so much for having me.

