Protecting Privacy and Dissent in an Age of Authoritarianism and AI
Justin Hendrix / Jul 6, 2025

Audio of this conversation is available via your favorite podcast service.
Helen Nissenbaum, a philosopher, is a professor at Cornell Tech and in the Information Science Department at Cornell University. She is director of the Digital Life Initiative at Cornell Tech, which was launched in 2017 to explore societal perspectives surrounding the development and application of digital technology. Her work on contextual privacy, trust, accountability, security, and values in technology design led her to work with collaborators on projects such as TrackMeNot, a tool to mask a user's real search history by sending search engines a cloud of ‘ghost’ queries, and AdNauseam, a browser extension that obfuscates a user’s browsing data to protect users from tracking by advertising networks.
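To make the mechanism concrete, here is a minimal, hypothetical sketch of the "ghost query" idea behind a tool like TrackMeNot: decoy searches drawn from ordinary phrases are interleaved with a user's real searches so that an observer of the query stream cannot easily tell which ones are genuine. The seed phrases, timing model, and send_query stub below are illustrative assumptions, not the tool's actual implementation.

```python
# Illustrative sketch of ghost-query obfuscation (not the real TrackMeNot code).
import random
import time

SEED_PHRASES = [
    "weather tomorrow", "best pasta recipe", "movie showtimes",
    "how to tie a tie", "local news", "train schedule",
]

def make_ghost_query(rng: random.Random) -> str:
    """Build a plausible-looking decoy query from seed phrases."""
    phrase = rng.choice(SEED_PHRASES)
    # Occasionally mutate the phrase so decoys don't repeat verbatim.
    if rng.random() < 0.3:
        phrase += " " + rng.choice(["2025", "near me", "reviews"])
    return phrase

def send_query(query: str, genuine: bool) -> None:
    """Stand-in for issuing a search request (here we just print it)."""
    tag = "real " if genuine else "ghost"
    print(f"[{tag}] {query}")

def obfuscated_search(real_queries: list[str], decoys_per_real: int = 3,
                      seed: int | None = None) -> None:
    """Interleave each real query with randomized decoys at jittered delays."""
    rng = random.Random(seed)
    for q in real_queries:
        send_query(q, genuine=True)
        for _ in range(decoys_per_real):
            time.sleep(rng.uniform(0.1, 0.5))  # jitter, shortened for the demo
            send_query(make_ghost_query(rng), genuine=False)

if __name__ == "__main__":
    obfuscated_search(["helen nissenbaum contextual integrity"], seed=42)
```

The point of the noise is not to hide any single search but to degrade the value of the aggregate profile an observer can build from the stream.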
Building on such projects, in 2015, she coauthored a book with Finn Brunton called Obfuscation: A User’s Guide for Privacy and Protest. The book detailed ideas on mitigating and defeating digital surveillance. With concerns about surveillance surging in a time of rising authoritarianism and the advent of powerful artificial intelligence technologies, I reached out to Professor Nissenbaum to find out what she’s thinking in this moment, and how her ideas can be applied to present-day phenomena.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
Professor Nissenbaum, it's great to speak with you today. We're going to talk about the present moment and hopefully relate some ideas to that moment that you've been working on for some time. My initial thought to reach out to you came in a conversation I was having earlier this year where someone said, "Remember that great book Obfuscation: A User's Guide for Privacy and Protest?" They were thinking maybe we need to go back to that work and some of your other work and kind of think through, in this moment, where there's such heightened concern about surveillance. There are so many things happening in the world, in this country, in the United States in particular, that have us focused on questions of personal freedom, personal information, the tensions between what we do online and what we do in the physical world, and the extent to which those activities are surveilled.
So I'm excited, I want to try to figure out how to lift from your body of work some things that are hopefully useful for Tech Policy Press listeners in the moment, but also give them ways of thinking perhaps that may be useful from a policy perspective as well. So maybe I'll start just by asking you a couple of basic questions. I guess the first one is really around your theory of contextual integrity, the way you define privacy, the way you think about privacy in the modern age. It feels important to me that the listener hears from you where you're at on that, how you explain that.
Helen Nissenbaum:
I felt a lot of concern at the way privacy was ... The approach that people were taking to privacy protection outside of the domain of government, where we had some regulation, was through this mechanism of notice and choice. And I think we're all familiar with privacy policies and so forth, how absurd they are, and how they continue, by the way, to dominate. It struck me that the approach being taken simply dumped the decisions onto the individual, to decide whether they consent to some particular way of collecting and using data about them. And by the way, just to note, I never say your data. I always say data about you, and this is really ... It's become even more important today, and we can come back to that if you like, for two reasons actually. One was that we all know that people simply don't know enough; they don't read policies and so on. This part is well known, and many people in the area have shown this with empirical results.
But the other, maybe more subtle, reason is that, in fact, privacy is a societal value. And so although on the one hand none of us is able to make decisions that serve our own preferences or our own interests, I also thought that privacy was something whose benefit went beyond the individual and was more a societal value than only an individual value. And one of the main propositions of the theory of contextual integrity is the claim that privacy is not just an individual good, but a societal good. The idea behind contextual integrity is that privacy is appropriate flow of information in societies, and that the evaluation of whether a flow is appropriate or not depends on the context or social domain in which this flow is taking place. So that's roughly it. And the theory develops an idea of contextual norms, what constitutes the norms, and so on. But the basic idea, the one line, is that privacy is appropriate flow of information about an individual, and second, that appropriate flow serves individual good, but it also, and maybe predominantly, serves societal good.
Justin Hendrix:
So let me just ask a couple of questions to plumb this a little bit more. You've talked about the difference between privacy and secrecy, the ability to kind of keep certain information secret, which is different from being able to keep information private. Can we plumb that just a bit?
Helen Nissenbaum:
Yes, and I'm so pleased you asked, because it gets to a pet peeve that we're constantly hearing about. People, and often, I have to say, computer scientists, will say, "We have to weigh privacy versus utility." And in this kind of landscape, they identify privacy, or claims to privacy, as preventing the flow of information. So instead of saying privacy versus utility, what they really should say is that it's the stoppage of information flow, for which I use the term secrecy, which is the appropriate term for that, like stopping information leakage, versus utility. Contextual integrity wants to say that appropriate flow should serve utility in its different manifestations. And so I clearly want to distinguish between secrecy, which is sometimes good and sometimes problematic, and contextual integrity, where what we're seeking is appropriate flow. And sometimes, by the way, that means flow, meaning it's a good thing that information is flowing.
Justin Hendrix:
So the idea is that the context can change, and the way we think about what is appropriate with regard to privacy can change as well. I mean, I suppose that slightly brings us to the moment. The context has changed over the last couple of decades, not just in the United States, but certainly it feels that the context has changed drastically even here in just the last several months. This is kind of a bad question at this moment. When you started studying these things in 2004, around the time that your paper on contextual integrity was first published, we were at the dawn of the internet age, the browser, the ad tech ecosystem; fair to say social media was a relatively early concept. I think Facebook was founded, for instance, in 2004. I'm curious how you've observed the world change in those 20 years.
Helen Nissenbaum:
So this is also a good moment, Justin, to mention that among my regrets is using a term like context, which has so many different meanings. I have a couple of articles somewhere in my list of publications that, by the way, anyone can download for free, sadly including the gen AI scrapers, but we'll get to that. And the question is, what do I mean by context in contextual integrity? Really what I mean is social domain. So when we talk about healthcare, that's a social context. We're talking about social contexts: education, politics, family and social life, commerce. These are contexts. Things change. Technology changes. But when we say that we used to use the telephone and now we're using WhatsApp, the context hasn't changed. If we're talking to friends, if we're talking to physicians, if we're talking to the members of Congress representing us, those are the different contexts in which we live our social lives; different contexts, different norms.
Now, changes in technology, changes in what is possible and not possible, and in the environment, many changes that we could also call contextual changes, can affect the norms. So it could be that you could always be asked for your social security number. That used to be okay, let's say, when you went to buy something or applied for a job, and then we found that that information, in certain contexts, made it possible for people to steal your identity. And so yes, the norms, the expectations that people have about what's appropriate flow in given contexts, can change, and absolutely must change, also to enjoy the benefits of technology.
Justin Hendrix:
Well, not to be dim about it, but what do you think has changed in those 20 years? What would you point to as the most significant things that have changed?
Helen Nissenbaum:
So from the point of view of contextual integrity, we have this thing called the norm, a contextual norm. That norm, sorry to get into the weeds, has five parameters. When you're describing a flow of information, either one that you're considering whether it's appropriate or not, or maybe you're designing a technology and you want to figure out what you should and shouldn't allow with this technical system, we represent a flow in terms of who's sending the information. So it's sender, receiver, data subject, the type of information, and something that I call a transmission principle, like under what conditions is this information flowing? And when we give values for those parameters, we do it in terms of roles. So we may say that a subject, a patient, can share health information with a physician under a transmission principle of confidentiality, and that might be deemed an appropriate flow.
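To make those five parameters concrete, here is a minimal sketch of how an information flow and a contextual norm might be encoded and checked. The field names follow the parameters Nissenbaum lists above; the healthcare norm list and the matching logic are illustrative assumptions, not a formal specification of the theory.

```python
# Illustrative encoding of a contextual-integrity flow and norm check.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str                  # role of the party sending the information
    receiver: str                # role of the party receiving it
    subject: str                 # role of the person the information is about
    info_type: str               # e.g. "health record", "search history"
    transmission_principle: str  # condition on the flow, e.g. "confidentiality"

# Hypothetical norms for the healthcare context: flows listed here are appropriate.
HEALTHCARE_NORMS = [
    Flow("patient", "physician", "patient", "health record", "confidentiality"),
    Flow("physician", "specialist", "patient", "health record", "with consent"),
]

def is_appropriate(flow: Flow, norms: list[Flow]) -> bool:
    """A flow is appropriate if it matches an entrenched norm of the context."""
    return flow in norms

if __name__ == "__main__":
    ok = Flow("patient", "physician", "patient", "health record", "confidentiality")
    bad = Flow("physician", "ad network", "patient", "health record", "sold")
    print(is_appropriate(ok, HEALTHCARE_NORMS))   # True
    print(is_appropriate(bad, HEALTHCARE_NORMS))  # False
```

In this simplified reading, changing any one parameter, say the receiver from "physician" to "ad network", turns an appropriate flow into a violation, which is the kind of shift Nissenbaum describes next with new, uncharacterized actors.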
So one of the huge things that has changed is we have new actors that we don't understand. Early days, we didn't understand their role. Now when we say physician, the articulation of the role of the physician is really important. We have an idea of what physicians do and why they're important and why it's really important to share with the physician. Society is very smart and they say it's really important for people who are ill to share everything with physicians, but the only way we can protect that is to ensure that the physician understands they must accept this information under conditions of confidentiality. This isn't a simplification.
Now along comes a social media platform, let's go with Meta or Facebook. And we haven't characterized that actor. Actually, we've had this before; this is not completely new ... In fact, you could say the Hippocratic Oath was that. What Hippocrates was noticing was that there was this individual who was a physician, and now we had to create the rules around a physician. And when we had telecom companies who could have access to a person's conversations, the law came in and said, "This is what a telecom provider is, and here are the rules regulating what a telecom company can and cannot do." It cannot record a conversation, except little bits of it to ensure that the quality of their wires was good, but they couldn't record it and then come back later and quickly see what conversation you were having with your spouse, with your investment advisor, and so on.
And so we're in this gray area where there are now actors in society who are receiving and gleaning information and we haven't figured out how to solidify the roles such that we know what the norms are in relation to those entities. That's a huge change. You're asking me about change. That's huge and it's a huge threat.
Justin Hendrix:
It seems like a compounding one, particularly in the age of AI. You must be thinking about this daily, just what it means to have such huge amounts of information hoovered into these foundation models.
Helen Nissenbaum:
Yeah, absolutely. Once again, when we talk about utility, when we are talking about benefit, there's always this very slapdash, I think lazy, way of talking about it where people will say, "Oh, well, there's the benefit to humanity." And of course we have to pay the price. It's like cost-benefit without digging in and saying, "Let's be clear about the precise flows of data which we should try to promote in order for the benefits to accrue to society at large." Nobody is supporting that kind of thinking, unfortunately.
Justin Hendrix:
Let's talk about this concept of obfuscation, and this book that you wrote 10 years ago. So I suppose 10 years after this concept of contextual integrity, you released this book and you talk about the idea of obfuscation, what it is. You call it, "The production of noise, modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on and therefore less valuable." I mean, the entire book is about ways to think about how to do that, how to essentially frustrate systems that would collect the data or would sit atop those flows of information and gather information or perhaps otherwise would surveil people or society. How has your thinking about the concept just of obfuscation changed in the 10 years since you wrote this book?
Helen Nissenbaum:
So let me just say a couple of things. Firstly, the manifestation, or the recognition, that obfuscation was a thing came about because of these products, these systems that Daniel Howe had created. I went to him with a problem; I thought this could be a solution. And Daniel then generated the first TrackMeNot and later AdNauseam. This was really instrumental, to see it in its function, and it really has taught me a lot about how difficult it is to put something out into the world, a piece of technology, good technology, and have it endure. It's really hard to break in. So that's the one side.
At the time this was happening, I was at NYU in the Information Law Institute, and I had a couple of post-docs, Finn Brunton being one of them. And I was describing how shocked I was when I went to a university and was presenting this work with Daniel, and one of the first questions during Q&A was, "How can you do that? That's immoral. You're lying," because when you were searching, TrackMeNot was producing fake searches. Somehow the ethics of that struck someone.
And anyway, I was talking about that, and Finn said, "Well, it's really interesting, because this approach, this obfuscation approach, you haven't invented it actually. It's historical, it happens in nature, animals do it, insects do it, and so on." But what we did try to do is characterize the exact circumstances that justify obfuscation, because we also understood that obfuscation could be used as a weapon of the strong against the weak. And I think we are experiencing that with our current administration, but it's also sometimes the only tool or weapon that the weak have against the strong. And I think that's the part of it that remains absolutely the same: the way, in 2006, we were inspired by this, and when I say we, I mean Daniel and I, and then in 2014, Finn and I really tried to bring this out in the book. I can say more about it, but I'll pause there for a sec.
Justin Hendrix:
Yeah, I wanted to get more into this idea of obfuscation as the weapon of the weak, the tool for small players against more powerful adversaries. You obviously recount many different versions and forms of obfuscation in the book, many of which are themselves, I suppose, rooted in the context of the web and technology as it was 10 years ago. I guess I'm wondering, as you have seen various methods of surveillance, algorithmic decision-making systems, machine learning, artificial intelligence, facial recognition, as those things have continued to grow ever more sophisticated, how have you continued to think about countermeasures, about obfuscation, and how effective those things can be?
Helen Nissenbaum:
So just coming back, if we think about obfuscation as a type of tool, having gone through the argument, having felt it necessary to address people who said this is a problem, they would say things like, "You're free-riding, you're poisoning the data lake. You're lying." All of these things. And this is where contextual integrity connects with obfuscation. What was really important, in the instances of obfuscation that I think are good ones, the ones that I've developed with Daniel and others and want to think are justified, is that inappropriate data flows, what we can call problematic instances of surveillance, are taking place and they're not being addressed. And I think one of the reasons they're not being addressed is that, again, the more powerful, the more wealthy, the more in the know, and so on, who hold many of the strings in society, are able to capitalize on the vacuum in sensible policymaking to perform some of these profiling activities and, I would say, unfair discrimination against certain parties.
So that to me justifies some of the obfuscation. It's not just like, oh, obfuscation is a good thing. First you have to justify it in terms of violation of certain kinds of values, including privacy. So that's how these two things come together and why you can think of obfuscation as a solution to some of the surveillance. And I say some because I don't know that it's a solution to all.
Now, when Daniel and I wrote one of our articles addressing the concerns that had been raised about TrackMeNot, we ended that article by saying, "The world that we want to get to is a world where you don't need us." So we want our tool to be potent enough that these folks will come to the table and be willing to hash out acceptable policies of ... Now, the word surveillance has a negative taint, and maybe it should keep that negative taint, but maybe we should say oversight and so on as the positive. Sometimes it's a good thing, sometimes it's problematic; let's hash it out, and if you don't, we are just going to obfuscate. So it was sort of a tool and a weapon to get us to the point where we're not needed anymore. That was like, oh, the heavens will open, the sun will shine. And unfortunately that never happened, and that leads us to where we are today.
Justin Hendrix:
Trying to think about ... So I guess the question is, if nothing happened, if the conditions essentially just got worse, I suppose that means the ethical justification for obfuscation is simply more clear. But that doesn't necessarily mean that the technical means to obfuscation have gotten easier. It seems like they've gotten far more difficult.
Helen Nissenbaum:
Yes. And I've got the thought ... I mean, I've even come to this way of thinking that we have to actually create a startup, because we need to find a way to support ... Anyone who's created open source systems knows that it's all very nice and exciting to create the system. And then you have the task of maintaining the system. And it's not just maintaining the system against an unchanging environment. It's that the very environment you're in keeps changing, and the only way you can have your system keep working is to adapt the system. So we know that the, say, browser, the evolution of the ... because we sit in as an extension of the browser ... Now I'm just talking about our products. I know that there are others, and maybe they've solved problems we haven't, but thank God for open web protocols, because that was the only way we could do it. Secondly, we could do it in Firefox; they were open to it. Chrome, we've had a lot of trouble. We've been banned. Safari, we didn't even try, because Apple's such a closed system.
So there's a lot of pushback constantly, and you go backwards if you don't go forwards. There's no standing in place. And nobody is really eager, nobody with money, power and so on, is eager to assist in an effort like ours, because obviously it goes against their benefit. However, what really always puzzled me was why certain commercial actors wouldn't want a tool to obfuscate. So if you're ... again, I'm coming back to investment advisors, or maybe you're a bio startup and you've been doing searches on certain patents relating to your latest invention. Now copyright holders must be terrified. So it's not just private individuals fighting off every commercial actor in the realm.
And this was something I credit to Vincent Toubiana, who was one of the people; he now works for the French data protection authority. He had written a lot of the code of TrackMeNot and maintained it, as a volunteer, for many years. He was like, "Companies should be worried that Google is maintaining a record of all searches, and who knows what they're doing; they could analyze what companies are interested in, what investments they may be looking at, and so on." And now maybe we will be able to attract the attention of other business parties who are aware that what they're doing could be gobbled up by the big AI tech companies.
Justin Hendrix:
In a way, it's sort of like this approach to artificial intelligence that we're watching in the world almost makes everyone vulnerable. It maybe expands who would fit into that category of vulnerable populations that you had imagined in your book.
Helen Nissenbaum:
Exactly. First of all, and I'm sure we all feel it, we want to post ... as an academic, I want to post everything that I've done on my website, and I want it to be free, because I'm getting a salary, but I'd love my ideas to spread. So on the one hand, I want to push stuff out. On the other hand, do I want to make it so that any old scraper can come along and just scrape that material? And the very thing that feeds my career is credit and attribution. It's not necessarily money. I mean, I'd love to make money off of these things, but forget about it. But if someone's going to invent the ideas that I have already invented and not credit me, then I lose the, what's the right word, currency of my trade. And this could apply to anyone who deals in information goods, including artists.
Justin Hendrix:
What you just said seems so obvious to me, and I feel that this is the fundamental argument that many artists have been making, but also people from different crafts and other types of cultural makers, if you will, and now maybe more professionals are beginning to recognize the same principle: that there's something inherent here, a kind of theft that's occurring. And yet there's a powerful set of arguments, well-backed by billions in marketing, that this is progress, that we're better off if we're able to take Helen Nissenbaum's ideas, synthesize them with machines, and come up with the next set of hypotheses that she simply wasn't fast enough to get to.
Helen Nissenbaum:
Yeah, I mean, on the one hand, just like, yeah, that's exactly the argument, and we can launch into, we could say things like, I have a right, and so on, and I don't even want to go there. Because to be honest, I think those kinds of arguments just fall on deaf ears, because we're so used to saying, "Oh, well, if people are going to love cheap hamburgers and terrible french fries, let them." If I'm a chef who wants to serve fresh, nutritious veggies and no one comes to my restaurant, well, so be it. That's the way it is. And we can't say, "Oh, I have a right." No, we have every entitlement to, but what's going to sway? What's going to persuade? And here it's a much more complex argument, and it may have to do with, okay, we are the first generation of academics, artists and so on who are going to be crunched by this big machine, and then this machine is going to produce stuff.
And people, I'm not at the forefront of this discussion of how it's going to just get more and more stupid as it feeds on itself. But in the meanwhile, you're going to disincentivize people going into certain professions, for example, education and research, and ultimately everyone's going to be eating hamburgers and fries because there is no way for ... So I want to think of, if no artists can make a living and so on, the culture, where's it going? So it's a big societal question, and I'm hoping someone can make the argument. This is not going to be the argument I'm really able to make because it feels too close to home.
However, I do want to bring other people to the table to say: if you see that you need your product to be out in public in order to succeed, whether you're an artist or a business or a biotech company, you want your product to be out there because that's how you get customers and publicity and so on. And yet, if you put it out there, it's going to be scarfed up, and who knows who's going to have access to the output? And your business is in danger, your company is in danger, your existence is in danger.
If Congress gets its way and says no regulation of AI for the next 10 years in any of the states, we're in a situation where there's not going to be any social regulation, and we have to fend for ourselves, and this is where obfuscation comes in, because the only power we have if we can't actually shape what the big guns are doing is to, on our side, on the client side, so to speak, create the confusion that then undermines trust in the output and the product. So that's why I think obfuscation has to be an answer, and we should work on it. Everybody.
Justin Hendrix:
How is obfuscation different from sabotage?
Helen Nissenbaum:
It is a form of sabotage. So yeah, when you talk about sabotage, you're saying, okay, someone has a certain goal, and what you're doing is sabotaging their achievement of that goal. So at a high level, it is. But it's a specific type of sabotage that's suited to a particular threat model, if I can use that kind of security lingo, and it's justified, morally or ethically, in a certain way. That's how I would relate it to, and also differentiate it from, the general pool of sabotage.
Justin Hendrix:
I have a couple of just unformed questions, but I'm going to ask them and just see where we can get to with it. One thing I found myself thinking about, just with even the concept of obfuscation, it brings to mind the idea of masks or masking identity. We're in a situation where in this country there are masked agents of the government who are wandering around, yanking people off the streets, and yet masks are discouraged among those who would protest those actions. We're in a situation where law enforcement entities have access to powerful identity management software for running false identities on social media for investigative purposes. They're essentially masking themselves online in order to gather information. And yet those types of things aren't necessarily available to individuals. How do you think about just the idea of a mask and the concept of obfuscation? Is that part of it? Is it really about protecting identity? Often, is that what this comes down to?
Helen Nissenbaum:
Identity, so if I hear this, and this is sort of interesting, okay, lots of different directions. We're concerned about identity. Identity is a piece of information about us that we've decided is important to control in certain circumstances. So we protect a person's anonymity because we think that anonymity in certain circumstances is important for the individual. It's also important for society at large, like whistleblowing, for example. And let's say, we've always respected political protest in this country in the past, and we've understood, we've believed, that in itself is a value. Nowadays, we've come to understand that we're not so great in this way and that we have governments who may go after us for political protest. So there are different ways of addressing this identity concern. And here I want to distinguish masking or hiding identity from obfuscation.
And I don't know that one is better than the other, but when you're being prevented from hiding, you may find that obfuscating is a better technique because sometimes you can obfuscate by looking like or seeming like someone else and the party who's your adversary doesn't even know. So I mean, they're different forms of obfuscation and we do a lot of that categorizing of it in the book. And we talk about cases where you want the party to know you've poisoned the data, and sometimes you want to be stealth so they don't know. And I think with identity, we do sometimes want to be stealth.
And it's interesting, because we've had a few obfuscation workshops, and at one of them there were a couple of artists who'd created glasses. Now, what I can't remember is exactly how they outfox the facial recognition system. It's not by hiding the face; sometimes in Google Maps Street View or something they would blur the face. Rather, it's: yes, this is a face, we just can't link this face to your identity. So that's an obfuscating technique as opposed to a hiding technique. And one of the arguments we've tried to make is to say, "Well, we have a right to obfuscate, therefore you need the protocols to be open," that kind of thing. Fallen on deaf ears, probably.
Justin Hendrix:
You have obviously been in conversation with policymakers and with others who are thinking about privacy legislation, regulation, tools, rules, et cetera, over many years. But let's just imagine that perhaps there is a change in the political context in the U.S., although we could also talk about what's happening in Europe; they appear to be rethinking their GDPR, for instance, and maybe some aspects of that might be worth talking about a bit. But just focusing your mind on the U.S. for the moment: if another moment were to come around where there was an opening for real thinking about privacy protections for people, what would you raise to the floor? It sounds like part of it would be almost creating a kind of different architecture for technology that would not only ensure, well, let's say contextual privacy, but also create the conditions for resistance when necessary. Is that right?
Helen Nissenbaum:
Well, I think you've just said it really well, which is to say you ideally would go for good policies, improve your policies, recognizing that what we have here is a kind of free-for-all. Just to go back, I was surprised when people pointed me to Edmund Burke, who is this very conservative political thinker from centuries ago and has inspired a lot of conservatives. I'm like, oh, how could I ... But I went back and read what they advised, because in some sense, in its early iteration, contextual integrity was quite conservative, because it said, whatever the norms are, that should guide appropriateness. I hadn't completed the second part of the theory, which said sometimes the existing norms are problematic because they're not achieving values. And when things change, you need to also question the existing norms, and so forth.
But back to Edmund Burke, the thing that I found interesting is that he said something like the following, in my own interpretation. You're a teacher in a classroom and you say, "I want to give my kids a lot of freedom, and so I'm not going to discipline anybody." And you think this is going to really bring out everything in them. Now just think about the consequence of that freedom. Is it going to make people creative and free of inhibition? What do you think is going to happen if you just take away all the rules of behavior? Do I have to answer that question, or do you-
Justin Hendrix:
I think I get it.
Helen Nissenbaum:
Which is, the bullies prevail. That's what's going to happen. The bullies prevail and the other meeker students who may have lots to offer society are going to be cowed into submission and silence. And that's the way I think of the current situation, which is like, what's that guy's name? I forget his name, who was like, "We don't stop innovation, don't regulate. It's just going to ..."
Justin Hendrix:
That could literally be almost any man in Silicon Valley at the moment.
Helen Nissenbaum:
He's a big tech investor.
Justin Hendrix:
Marc Andreessen, I think you're thinking of.
Helen Nissenbaum:
You are quite right. He's like that, and yet what I want to say, my response is: if you don't regulate, the bullies are going to win, because obviously they ... Bullies are maybe good in certain circumstances, but in other circumstances it's a terrible thing. So I would like regulation, and that's really where I think contextual ... Some of the details of contextual integrity, I'm not going to die on the sword for those details. They were details that were necessary.
But the one really important aspect of contextual integrity is that the regulation of data flows needs to serve the ends and the purposes of the context and where I've been a little bit successful in the European policymaking arena is to bring that idea forward. I have not been very successful except in small pockets of the U.S. privacy regulation environment. In some cases, they sort of follow that, like in healthcare where we've got sectoral regulation, some of that is happening but not effectively. So if I could change the world, if I could bring about the change, I would love people to understand that privacy regulation has the capacity to promote contextual ends and purposes, and isn't just about holding information secret away from the benefits it could offer.
Justin Hendrix:
So I suppose my last question, that was asking you to imagine in a moment of political opening, I've got a siren going by, so I'm going to just wait one second while the siren passes. So if my last question was asking you to imagine a political moment where perhaps more progress could be made on the regulatory or the policy level, maybe just the moment we're in, what would you encourage the tinkerers, the makers, the people who are on the front lines of encountering perhaps the worst abuses of surveillance systems, what would you say to them right now? What would you encourage them to work on?
Helen Nissenbaum:
Yeah, I want us to create, what's the saying? A thousand flowers bloom, let a thousand flowers bloom. Let a thousand obfuscation projects bloom. But we need you, Justin. I mean, I don't know if you're going to publish this part of the conversation. We need to bring awareness so that we can protect this capacity to obfuscate, and then we need people to obfuscate. We need to publicize the tools that this community creates. Somehow we need to bring unity. I'm always dreaming of being able to support people who could create a convening and a website, so that if I said I want to obfuscate this or that or the other thing, I could go and find the tools that I need. That would be the message and my hope.
Justin Hendrix:
Perhaps that is a challenge for myself, certainly, but maybe also for others who are listening to your voice right now. Helen Nissenbaum, thank you very much. Folks can download this book for free on the internet if they would like to see Obfuscation in its entirety. And of course, as you say, visit your site to find your other writings and works. What's next? What should we be looking for next from you?
Helen Nissenbaum:
I'm still very interested in contextual integrity because there's been a lot of interest in applying contextual integrity in the sphere of gen AI agents, AI agents. So I'm very interested in that. But it's part of a bigger project in thinking about ethics in relation to gen AI and this concept of alignment, which I think is doing a lot of work and a lot of harm.
Justin Hendrix:
We'll follow that work here and perhaps have you back on to hear about it.
Helen Nissenbaum:
I have got something to say, but Justin, thanks very much for inviting me to this and such a productive conversation.
Justin Hendrix:
Thank you.