Data Rights in the Age of AI

David Carroll, Justin Hendrix / Jul 14, 2024

Audio of this conversation is available via your favorite podcast service.

In this episode, David Carroll, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School, speaks to Ravi Naik, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist Max Schrems.

What follows is a lightly edited transcript of the discussion.

David Carroll:

So thanks for joining us, Ravi. Just in a quick summary, what is the legacy now, if we could call it that, of our case against the Cambridge Analytica companies in the UK?

Ravi Naik:

Well, hi David. Thank you. Firstly, it's such a pleasure to be with you as ever and such a pleasure to be speaking to you about such interesting topics and topics that we did a lot of work together on during the Cambridge Analytica related cases, but also I had the chance to learn from you while we were going through the cases and learn from you about how a lot of these models work. So really thank you for all this knowledge you've given me and I'm really glad to be able to discuss some of that information and how it's now playing out through the court cases that we're doing at the moment.

So looking back at where we are from the Cambridge Analytica cases and what the ramifications and the longer tail of those cases, I think you can distill it into three main issues.

I think one is you've got to keep in mind that when we brought your case, this was before the gold standard of data protection, known as the General Data Protection Regulation, the GDPR. The Cambridge Analytica case for you came before the GDPR had come into force. So really we were at the cutting edge of the way that legal field was developing, both in terms of the law itself and the precedents that were being set. So it was really a test case for the way these laws get enforced and implemented.

I'd say the second implication was just public understanding, not just how people's data gets used, although that was a big part of it and obviously the wider story was about the misuse of personal information, but really the idea that people had rights and had rights that they could stand behind, that you as an individual could assert your rights and use the power of fundamental human rights to assert claims against companies.

I'd say the third legacy is it really changed the way companies think about using data, the way companies still now think about the Cambridge Analytica effect, the idea that they might get sued, the way political parties use data, the way political parties engage with third parties to do their consultancy. And the way the entire dynamic around personal data has changed as a result of the way you brought that case, and the way you stood by your rights, I think is really consequential, and it's something that I see throughout my day-to-day practice. So as I always say to you, David, really kudos to you for bringing that case and really trying to test the very simple right to ask for access to your information.

David Carroll:

Thank you, Ravi. Yes, that's what caught my eye with this new case versus OpenAI: it seems, at its most basic glance, to be a case about the request of data and the full disclosure of it. And indeed the bounds of disclosure, to me, were a key contested area that our case helped to shine a light on, that is, not only do you have a right to request the data, but you have a right to dispute the contents of it, the completeness and the accuracy of it. And so indeed, when we asked for data under the UK Data Protection Act, way back even before President Trump was inaugurated, we did receive some data, but then we contested its completeness, and that dispute, the standing and the ability to do that, was granted regardless of my citizenship and residency. So that was an exciting element, and indeed to have the UK data protection office agree that the results were not adequate.

And this is so fundamental to not only getting data but making it a meaningful act: the organizations that possess data have a duty to not just provide it but to provide the adequate context around it. And then algorithmic decision making and large language models, these new developments that didn't really exist as such when we were doing Cambridge Analytica, pose similar quandaries to the question of access and the meaningful dimension of that access. That is, is it even a meaningful exertion of rights? Can you get the information that then is truly an expression of the right? When you literally look at the data and analyze it, is it meaningful yet?

And then indeed, in the case of the Cambridge Analytica story, we successfully established that we had a right to know, but because of the specifics of the liquidation and insolvency and bankruptcy, we were stymied in getting it, and the regulator was obstructed by that. But that was quite particular to the Cambridge Analytica companies, which were relatively small and could be easily unwound.

We now have a case where Max Schrems' outfit is pursuing OpenAI on a similar premise, that specific users have requested data and have not been satisfied with the results. So now let's get into the new case a little bit. Based on your understanding of it, and I have a very superficial understanding of it, am I correct that it's using the same principles of access and challenging AI to produce more meaningful results, and OpenAI won't be able to declare insolvency if they can't comply? So how do you see this rolling out from here, and how can you further unwrap the case itself for the listeners?

Ravi Naik:

Yeah, I mean it's like most things that Max and his organization do, it is very interesting and it's very much at the cutting edge of a lot of thinking around data protection and how to enforce the rights under the GDPR.

Now, I also only have a relatively superficial understanding of the case, because the documents that Noyb put out are the redacted complaint, without any underlying correspondence or engagement they've had with OpenAI. But it seems that you're right in that one of the aspects is access. So they're asking OpenAI to provide the individual with access to the data relating to them, but also to rectify, or at least correct, an inaccurate date of birth that has been assigned to the individual. So there's a duality to the case, but there was a duality to your case. Your case was about access, but you also tested the legality of the processing. You said it wasn't lawful to process data in the way that Cambridge Analytica were doing.

Now in the Noyb case, again, we don't know the name of the individual, but this duality is very interesting, and what it means for the reality of data protection up against the reality of AI. Because effectively what they're trying to do is to use the accuracy principle to change output data, and what OpenAI seem to say in response is, well actually, we don't need to rectify, we don't need to put the correct date of birth in place. We'll just block any information related to the data subject. Now there's this very interesting quandary at the heart of that as to, is that enough? Is it okay to say we're fulfilling the right the individual has, or the principle of accuracy, by erasure rather than rectification? Which is really interesting.

I've tested this with some of my colleagues, and it's very interesting that my team had very different thoughts and approaches to all of this as well: can you fulfill it? Is there a disproportionality element involved? If OpenAI say, "We just can't do that because we can't predict or engineer the output data in that way," is it actually disproportionate to expect OpenAI to do that, because you'd open some sort of floodgate, which would effectively make AI impossible to deploy in Europe?

Now, it might be that it should be impossible to deploy if you can't guarantee individuals' rights. All these kinds of thorny questions are going to come more and more to the front of things, I think, as OpenAI and AI generally get deployed and become a more common tool, even setting aside whether that will actually happen, because I think there's a lot of discussion about whether AI will meet the hype about how people say it will be used.

But I think what you're going to find is increasing litigation, and not just in Europe. I think there'll be increasing litigation in the US as well, as you see with the Scarlett Johansson case and the use of her voice, or not, as it might be, as well as copyright issues, as well as the group actions you see in the US about underlying use of data. And there is this fundamental question about whether the public internet is a public good or something that individuals have control over.

And obviously you have this contrast between the US and Europe. In the US, you have class actions about the misuse of that information to create the training set, the input data rather than the output data. In Europe, class actions are less common. And you have this question: you have this strong data protection regime, but is that really going to be focused on output data or on input data? Because the data protection regime in Europe is really focused on the process. What is the processing involved, and what is the remedy you seek where there's an inaccuracy or unlawful processing?

So I think the summary answer is that it's really fertile ground for litigators. Our own caseload includes a series of cases related to AI, though I think AI is maybe a bit of a misnomer; more broadly it's just automated processing, automated decisions, algorithmically based decisions. And I think there must be teams of lawyers across the country and across Europe who are dealing with automation-related cases. So it's going to be an exciting time to be thinking about this space and thinking about how individuals can empower themselves during this automated digital era.

David Carroll:

It sounds to me, from my non-legal amateur mind, like there are echoes of the Google Spain case and the right to be forgotten, the challenge of individual and regional rights to information as that overlays global differences. So the idea is you can exert your right to be forgotten in one nation, and if you go to google.es you are forgotten. But then if you go to google.com, you are remembered. We may be seeing similar echoes in the synthetic media age, where machines can process data to produce new media, texts, images, and sounds which are wholly synthetic, and then where are the boundaries therein?

And it also seems to be potentially a more recent echo of an initial dispute with I believe it was Italy and the suspension of OpenAI until a certain right of access or something was provided. Can you provide some background there-

Ravi Naik:

That's right. Yeah, so I think this was at the advent, or I say the advent and deployment, of early ChatGPT, when the initial hype was there and OpenAI and ChatGPT were the zeitgeist. When it was deployed in Europe, or when it was released in Europe, there was, well, no effective privacy notice. There was no effective way for individuals to assert rights. You now have an OpenAI privacy notice that delves into some of this stuff. It does tell you about the rights you might have to access, as well as some of the rights you might have to, for example, object to certain types of processing, as well as rectification of certain data. And they have a specific part about rectification of inaccurate data. And here is where I think your comparison to Google Spain and the case there is very apt. I'll give a bit of context to that case and how that now maybe maps, or does not map, onto the ChatGPT, OpenAI example.

So the Google Spain case was a request by an individual to deindex a search result against his name. In jargon-free terms, that meant: can you please remove this result from the search of my name? The individual said, "Although this does relate to me, and although it is an accurate result, it relates to criminal matters which are now old, and I have a right to rehabilitate myself and for my reputation to move on as time moves on." And eventually this case got to the European Court of Justice, and the court said, "Yes, that is correct. You must have a right to move on. Your reputation has a right to move on, your life has a right to move on." And as a result, Google implemented this system where they do deindex results, and they have this constantly evolving timeline, and individuals can make requests for their results to be deindexed, as in go down to the bottom of the results page, page a million of the search results against your name.

Now, there is a very clear difference here. Let's say you make a request to OpenAI to be deindexed, to be forgotten as it's more commonly known. OpenAI doesn't maintain an accurate or inaccurate record of the output. So let's take the date of birth example in the Noyb complaint: they don't maintain that record. It's not something they can internally rectify, because it's an output. The inaccurate processing happens anew each time somebody queries ChatGPT. And actually you might get a different answer each time. So can you actually ask for rectification of future hypothetical processing? It's maybe quite unclear; maybe this is an aspect of the generative AI era which the GDPR is not actually very well equipped to deal with. I think the difference is between static processing, like you have with Google search results, and a more dynamic form of processing in the form of ChatGPT and its outputs.
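
To make the contrast concrete, here is a minimal, purely illustrative sketch of the static versus dynamic distinction being described; the names and values are hypothetical and do not reflect any vendor's actual systems. A search index holds a stored record that can be rectified or deindexed in place, while a generative model produces the contested statement anew on every query, so the operator can only filter or block output after the fact.

```python
# Purely illustrative toy: contrasting a correctable stored record ("static
# processing") with output that is regenerated per query ("dynamic processing").
# All names, dates, and structures here are hypothetical placeholders.
import random

# --- Static processing: a stored record that can be corrected in place ---
search_index = {"example person": {"date_of_birth": "1 January 1970"}}

def rectify(name: str, field: str, correct_value: str) -> None:
    """Fix the stored record once; every later lookup returns the correction."""
    search_index[name][field] = correct_value

# --- Dynamic processing: the statement is produced fresh on each request ---
def generate_claim(name: str) -> str:
    """Stand-in for a language model: there is no stored 'fact' to correct,
    and the answer may differ from one query to the next."""
    year = random.choice(["1968", "1971", "1975"])
    return f"{name} was born in {year}."

# The erasure-style remedy discussed above: suppress output about the subject
# rather than rectify it, because there is nothing stored to rectify.
blocked_subjects = {"example person"}

def answer(name: str) -> str:
    if name.lower() in blocked_subjects:
        return "No information can be provided about this person."
    return generate_claim(name)

rectify("example person", "date_of_birth", "2 February 1972")  # works for the record
print(search_index["example person"])   # the correction sticks
print(answer("Example Person"))         # the generated output can only be blocked
```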

So it's going to be really interesting to see how this gets dealt with by the courts and the European data protection authorities, and the European courts, and I guess also the American courts, because I can imagine similar things being brought there. But this distinction between static and dynamic processing and the differences in the outputs: it is a new era, and there are issues that the current framework doesn't properly address or have the capacity to deal with. And it does give rise to questions of, do you need new regulation?

David Carroll:

So before we potentially imagine whether we need new regulation, let's stick with imagining, wholly speculatively, how the technology might continue to evolve when legally coerced to become more compliant with the principles of the GDPR. And I think one thought exercise we should take is: okay, if this is artificial intelligence, and if it is to be as vaunted as Sam Altman says it will be, then shouldn't it be smart enough to understand the European data protection regime? Shouldn't it be smart enough to be as clever as a solicitor, with all that knowledge, and shouldn't it be able to respond to data subject inquiries? So shouldn't AI be smart enough to listen to a user's data protection requests and then act upon them lawfully?

So for example, if I were to go into ChatGPT and say, "Tell me all the data that you have processed about me," shouldn't it have to respond in a legal manner? And if I make a request to say, "I don't want you to be able to say these things about my... I want you to keep these things about me confidential," shouldn't it have a legal obligation to abide by that? Couldn't AI adapt to the requests made by subjects and that becomes part of its functionality?

Now, to really get into this, of course Ravi, we would need to invite an AI engineer to reflect on the technical challenges that we're speculating on. But we can do that in another conversation once we've fleshed out what we really mean by whether AI could, in a way, natively become GDPR-compliant because of its intelligence, precisely because it can grasp the issues.

Ravi Naik:

Yes, I mean it's a really good question. You would effectively be training the data set to provide an answer that suits you. Now I would say two things arise from that.

If you allow that level of engagement, what's there to stop anyone from changing any information about anyone else? At what point do you have that leak through? It's a bit like Wikipedia and having controls. At some point you might need some sort of mediator. So I imagine that OpenAI would have some difficulty allowing that to happen, because you could make it say anything you want it to say. I could say that I'm six foot five and I play for Stars and I'm the best lawyer that's ever lived. I could make it say anything I wanted, essentially. I don't think that's what anyone wants to use ChatGPT for. So I think there is an interesting dynamic at play there. Even if you could, would you want it to be able to do that, and would the company that owns the proprietary model want to do that?

There are probably two other issues, I would say. Number one is this open question about whether it complies with the GDPR or not, this idea of the dynamic output. If it's a different output each time, let's say you did train it, and you had the right to tell it what you want it to say about you, but then it changed because of the publicly available information about you, particularly for someone quite public. If you are trying to input and feed information into it, but then the public information says something quite to the contrary, at what point does that dynamic play out?

And secondly, or sorry, thirdly, isn't there a related issue about at what point your rights start to bite? Where do we want the law to bite? If you have the GDPR, which is about processing, and processing of your personal data, and if there is new processing each time there's a request to provide an output on David Carroll, how are you going to square any inaccuracy with each output and the dynamic nature of that output?

And it does give rise to this question of, well, the AI Act itself in Europe talks about appropriate levels of accuracy, whereas the GDPR talks about different levels or standards of accuracy, and how does this all square together? And what I think you probably need regulators to do, or politicians or legislators to do, is step back slightly and think, "What are we trying to solve for? What's the mischief we need to address? And is it just a one-to-one dynamic between an individual and a large language model like ChatGPT or any other kind of AI? Or are there just more fundamental issues, as you say: if this stuff just doesn't comply with the GDPR, how do we make it comply with the GDPR? How do we empower individuals?"

Now, I think maybe later on in the discussion we'll get into one of my projects, where we are trying to engender, we've been instructed to engender, rights for individuals over automated decisions and AI systems and the algorithmic decisions related to that output, and how certain industries are trying to flip the dynamic on its head to say, "Actually, if we're going to create this stuff, how do we put people at the center of it? The people whose art we are ingesting, how do we give them rights and agency?" So there's maybe a non-legal answer, which is just involvement of the people that the data relates to.

David Carroll:

Yes. So there's this question of not knowing, when you ask an AI prompt for something, whether the result is predictive text or trained text. Meaning, we understand that the large language model is essentially successfully predicting the next word in the sequence, because it has a model of language that exceeds the human brain's capacity to contemplate, so that it makes sense to us, even though in most cases the models are not making sense, they're just accurately predicting text. But occasionally the models are revealing trained data, and it's not indicated as such.

Do you think there are any legal boundaries in terms of companies having to disclose the output categories, or to ensure that trained text can't be output, that only synthetic text can be output? Do you think this is one way that litigation and decisions and so on could find a boundary in this technology?
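
To picture the distinction David is drawing, here is a toy, purely illustrative sketch; the tiny probability table and "training corpus" are invented placeholders rather than any real model or dataset. The point is only that output is sampled token by token from learned probabilities, and that a generated span may or may not happen to coincide with training text, which is the kind of sourced-versus-synthetic labeling being asked about.

```python
# Purely illustrative toy: next-token prediction plus a naive "provenance" check
# that flags whether the generated span coincides verbatim with training text.
# The model, corpus, and labels are hypothetical placeholders.
import random

# Toy next-token probability table standing in for a trained language model.
toy_model = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on": {"the": 1.0},
}

training_snippets = {"the cat sat on the mat"}  # pretend training corpus

def generate(prompt: str, max_tokens: int = 4) -> list[str]:
    """Sample one token at a time from the toy model, i.e. predictive text."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = toy_model.get(tokens[-1])
        if not dist:
            break
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return tokens

def provenance(tokens: list[str]) -> str:
    """Label the output 'sourced' if it appears verbatim in the training data,
    otherwise 'synthetic' -- the per-output disclosure discussed above."""
    text = " ".join(tokens)
    return "sourced" if any(text in s for s in training_snippets) else "synthetic"

output = generate("the")
print(" ".join(output), "->", provenance(output))
```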

Ravi Naik:

As in be able to understand what the predictive output for certain individuals might be and having them have some foreseeability?

David Carroll:

And also, I guess, I'm inferring that there would need to be maybe some transparency to the end user that some information is synthetic and other information is non-synthetic, that these words were predicted and these words are not predicted, they're sourced.

Ravi Naik:

I see what you mean. As in watermarking, effectively, watermarking the output?

David Carroll:

The source, that there would be some... you would have some distinction between sourced material and synthetic material. And it could even be down to a word-by-word basis, like a sourced word is in blue and the synthetic words are in black, so that they can-

Ravi Naik:

Right. Or maybe with a hyperlink underlining the blue text.

David Carroll:

To go back, again, here's where we definitely need to invite AI engineers to the conversation to tell us that's preposterous because XYZ, but indeed the law may ask for such solutions is what I'm suggesting.

Ravi Naik:

I mean, it is a really good point. Could you require transparency of the use of AI effectively and how it's used and what the output is based on the modalities of transparency to help you understand what you're engaging with?

David Carroll:

Something like that, yes.

Ravi Naik:

So, I can see the attraction of it, but I would assume the problems with doing that are a few-fold.

Firstly, if you try to apply this modality of transparency to the output of every AI model, it might make a lot of the AI unworkable. It might lead to friction in the process, which then makes the whole thing redundant. That's maybe a commercial point, which is why, when you look at the AI Act in Europe, it's focused more on certain types of AI, or certain types of uses of AI.

Now, I could give you maybe a better example, where there might be quite a strong case for this, rather than just outputs about what's happened with the football scores or whatever. Let's say political use of information, so political adverts. If political adverts are being made by political parties, or political campaign material is being made by parties or those they've instructed, that material to me should be watermarked, both as having a source and as being authentic or inauthentic. I think that's a very good case for having that kind of transparency around the output. But I think having that case for every type of output might become slightly unworkable. But it's very interesting. I can see arguments each way.

David Carroll:

That's fascinating. So, here we are in the summer of 2024. If I think back, it was the summer of 2016 that was the tumultuous political era that gave rise to the election of Donald Trump, and it was indeed chatter about Cambridge Analytica during that time that piqued my interest in the company, and then of course at the end of 2016 was when I was convinced to request the data that got this process started. And so here we are, two US election cycles later, and I see Nigel Farage is back.

Ravi Naik:

Yeah, he is.

David Carroll:

Everything old is new again somehow. And we are again going to have a nail-biter of an election in the US but this time, instead of Cambridge Analytica in the political battlefield, we have artificial intelligence. We've already seen a case in the United States where there's been an indictment against a person operating out of Texas who created a synthetic voice of President Biden and used that in the primary in New Hampshire. So we have a first indictment of AI impersonation to confuse voters and to create specific disinformation around electoral information.

Misleading voters about voting information is one of the few narrow areas where the US is willing to clamp down on speech in terms of the First Amendment. So it's going to be a very interesting case-

Ravi Naik:

Indeed.

David Carroll:

To look at, and how immediately the abuse of AI occurred here in this election, but also quite heartening that retribution has occurred quickly enough to deter other actors from jumping into this game. So that is an interesting development. We'll continue to be watching.

Certainly this past semester, in the class called Tech, Media & Democracy that I teach with Justin Hendrix at NYU and Cornell Tech, with faculty at Columbia, Brooklyn College, and my own university, we had many students, themselves from around the world, very interested in and concerned with the abuse of AI, and particularly synthetic media, in elections.

And in many cases students were interested in developing database trackers to capture instances of the use of AI in elections, because they anticipate it will be a growing practice and problem. So indeed the Cambridge Analytica story will continue as these abusive technologies find their ways into the electoral process and voters find themselves trying to assert their democracy and their associated rights against all odds.

Ravi Naik:

One really interesting thought is that the beauty of the law lies in the ability to have accountability and liability for people that breach the law. Now, we've talked a lot about these hypothetical situations where the law might apply, or where you might need new law, or where the law is a bit of a gray area, but all of this becomes slightly jarred when placed against the reality of bringing cases. And the reason your case was so rare and so heartening, but also so alarming, was just the direct cost of bringing action. If you are the victim, if you walk into somebody's restaurant and say, "Right, I'm just trying to get access to the information," the cost of access to justice is prohibitive. And actually there is this underlying assumption that, because of the rule of law and because we're a democracy, everyone can access the courts.

But there is this inherent problem. I think a lot of the liability lies with these big tech companies, and there's quite a small number of companies, they're almost a monopoly. The problem is, whether they are acting unlawfully or lawfully, to test those issues, to test these novel precedent-setting issues, requires a bit of a leap, because there's not much basis for applying the laws to new technologies, or new laws against new technologies. And that requires a level of risk appetite and a level of cost appetite which very often doesn't get talked about when we talk about law. And I think it's really important that when you think about how these companies are going to get held to account, and whether they're going to comply with the law, you think about whether regulators and individuals have the capacity and the means and the resources to bring these actions.

David Carroll:

So this brings us back to the beginning and the role of Max Schrems as an actor in this space, who seems to want to take on the risk of certain of these actions.

Do we need more outfits like his, like yours, AWO? Do we need more of these kinds of unique manifestations of legal agencies to contend with the risk mitigation problem that you were just describing?

Ravi Naik:

Yeah, it's a really good point. So I think the way the Max Schrems OpenAI issue has manifested itself is not as a court case, but as a complaint to a regulator. So we do have, in theory, this layer of protection, which is the regulatory authorities across Europe that have a statutory mandate to protect people's information.

Now, what you tend to find is those agencies have limited resources, limited technical understanding, limited appetite to go after big tech companies, because of how powerful those companies are, how they can bog you down with legal procedure and so on. That's why you see very few actual enforcement remedies from regulators. So while we in theory have this layer of protection, it's a difficult thing for regulators to try to enforce every breach of data protection regulations, because so much of our life is digital now, so much of our life depends on data and on companies that use large quantities of data. So Max is definitely trying to push things forward, but he's often leaning on regulators, as well as the courts where he can. So that's one interesting thing.

And thank you for also reminding me to talk about my own agency. So we set up AWO in the wake of our work together and what came out of this case. AWO was set up to be a home, a platform, for people to be able to assert their data protection rights through legal cases and complaints to regulators and so on, but also to be a platform to foster greater compliance, understanding, and policy work to do with data protection and developing technology. And I'll give you some quite good examples of some of the work we've done, which maybe differs from the more direct litigation that Noyb does.

So we do do litigation. We're doing a case against Meta that relates to how far you have a right to object to being profiled for direct marketing purposes, to being... basically, the business model of Meta is to profile each individual, understand your likes and your dislikes, and direct marketing and messaging to you. We filed an objection request to that, and they have tried to resist it, and we've taken that to court.

We're also suing remote gambling companies over how they use information, and how that information has been shown to relate to individuals' potential to be addicted to online gambling, and the effects that's had on individuals. So we're bringing action over the basis on which those gambling profiles were created.

So there's some interesting litigation work, but there's other work we're doing that I think is of real interest to you and the work we've done together, so I'll give you two examples that relate directly to AI.

One is we are currently working with an arts institution that is taking choral voices, recording choirs, putting those recordings into an AI system, and then using the AI system to have a public exhibition where people can use the voices and the AI to create their own choral concerts. Fascinating. It's a really fantastic exhibition, but what's really fascinating is the organization that is running this. It's Serpentine Arts Technologies in London, one of the main museums in London. They've engineered this Future Art Ecosystems project where they are recording choir singers, recording all of the choirs singing, with two well-known artists who use AI. They are going to feed the choral singing into an AI system, and in the public exhibition they're going to play the AI's output of the recording, as well as allowing members of the public to create their own choir songs from the AI system. Really fascinating project.

But what's really amazing about how the Serpentine have approached this, and how they have thought about this, is that they are effectively standing in the shoes of OpenAI at the point of the foundation model. They're creating the foundation model, and they have thought to themselves, not just, "This is great, let's do this thing," but, "How do we empower? How do we give agency to these people? How do we create an almost real-world example of how to do AI with humans in mind?" And they've come to us to say, "Can you help us create a mechanism, the architecture, to bake individual rights into the AI process?" Now, that is such an incredible example of how the development of these products could be different, and how the development of AI or sophisticated systems could have humans and human interests at the core. These can be human-centric systems that support innovation. Human rights can support innovation, and here's a real-world example.

I'll give you a second example...

David Carroll:

Wait, I still have chills from that one. Not only because, of course, choral music tends to give you the chills, but just the poetics of this case as well, just the idea of the human voice as such an elemental thing that you can own, it's so unique-

Ravi Naik:

Absolutely-

David Carroll:

To you and goes out into the world and then becomes, in a chorus it mixes together, it's this, greater than the sum of its parts, but it is based on individuals.

And then the idea that you can even get religious about it, just like that the music creation is more than human and will an AI achieve the sublime, even with human...

Ravi Naik:

Absolutely-

David Carroll:

I mean, you can see the poetics there, but it can only be done if it remains essentially human. So it's fascinating that this has been part of the project from its inception rather than an afterthought of compliance.

Ravi Naik:

Exactly. It's not a retrospective fit. When we talked earlier about, well, you've got these problems with the output data, not really knowing what is going to be said, that lack of predictability and so on, actually putting humans and the effect on humans at the start of the project really adds to the innovation, it adds to the system. It shows how these things can be done differently. And I'm really excited to be part of it. I'm really thrilled that we've been selected to be the legal advisors on this. And I think it speaks to this burgeoning interest in making sure people are part of this system, making sure rights are respected, and actually having that, say, net benefit and a positive in allowing people to have their rights and their own agency over how their information's going to be used.

So really, in theory, it speaks to the work AWO is doing and the reputation we have, but also to the fact that the wider public care about this stuff, and it's not just, for lack of a better term, nerds like me speaking to nerds like my colleagues, but rather people who really care about humans and the wider impact on society. So I'm really glad we've got to that tipping point.

David Carroll:

You were going to talk about a second one before I talked about the chills.

Ravi Naik:

The second matter, I thought, is quite interesting and illustrative of the work AWO does, and I'll maybe give one more example after this as well, just to tie it all together.

So we were instructed by the Ada Lovelace Institute, which is a think tank in the UK looking at digital policy, to do a legal analysis of three hypothetical scenarios in which AI could be deployed, and to map where existing laws provide protection and where the gaps might be as the automation comes in. And that was really illuminating, because it allowed us and the Institute to say, "Well, actually, there is quite a lot of protection here, and there are clear ways you can protect people where the gaps are."

So we have done this mapping exercise in quite a unique way. It's available publicly on the AWO website, and it shows very clear, concrete examples, for example in the employment context and the financial creditworthiness context. It shows you real examples of how these things come in and how you can resolve issues. So I'm really glad we were able to think about that kind of meta-level example.

And the third example is we've started working with unions, quite large national unions, to help address both how workers and labor should think about AI, and also how they should assert rights over the automation that's increasingly coming into the workplace to monitor work patterns and so on.

So we're really lucky to be working with a range of audiences to think about not just data protection but AI and the wider future that we all find ourselves in, and the way society's going to develop. Really, I feel quite lucky to have this agency that can work at the forefront of developing technology. And that all really stems, David, from you instructing me all those years ago on a hunch about what Cambridge Analytica were doing.

David Carroll:

It's so fascinating to see how it's branched out into all these essential areas, whether it's the essential human creativity or the basis of labor.

Ravi Naik:

Yes. Yes. The labor stuff is really interesting. What is the future of work?

David Carroll:

Wow. Well, Ravi, it's been so wonderful to reconnect about the current state of affairs and the evolution of the journey, the conflict between data, its use, abuse, and exploitation, and the assertion of our control over it, an epic struggle of the ages.

So great to hear what AWO has been up to. I didn't even realize all these interesting projects were going on. So thank you for sharing and helping to put the Max Schrems work into context for the North American audience, because some of these big ideas aren't a given across the pond, and it is important to recontextualize them, because indeed we still are a country where basic data protection and privacy rights are not enumerated, not enshrined, and always at stake. But at least there has been activity at the state level-

Ravi Naik:

I see-

David Carroll:

And at least we have seen some pretty interesting bills circulate at the federal level. So indeed a lot's at stake with the upcoming election, not only democracy itself, but I would say data protection and privacy and AI rights are also at stake in November here in the United States and beyond. So we'll continue to see what's brewing. And, Ravi, thanks so much for talking with us today-

Ravi Naik:

Thank you, David. It's been excellent. We can spend many hours doing this. I'm glad we've got so much in the hour we did have. Thank you so much for inviting me and hopefully we'll speak again soon.

Authors

David Carroll
David Carroll is an associate professor of media design at Parsons School of Design at The New School. He is known as an advocate for data rights by legally challenging Cambridge Analytica in the UK in connection with the US presidential election of 2016, resulting in the only criminal conviction of...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
