
The Future of Privacy in the Age of AI

Justin Hendrix / Jul 21, 2024

Audio of this conversation is available via your favorite podcast service.

It goes without saying that privacy and the creation of laws and regulations around it are fundamental to determining how we will live and work with technology, and whether technology operates in service of democratic societies or only in service of governments and corporations.

A couple of weeks ago, I had a chance to speak with two leaders from the Future of Privacy Forum (FPF): Jules Polonetsky, its CEO, and Anne J. Flanagan, the head of its new Center for AI. We discussed the recent US Supreme Court decision to overturn the Chevron doctrine and its implications for privacy legislation in the United States, the fierce battle over privacy laws in the US, and potential conflicts between Europe's General Data Protection Regulation (GDPR) and the new AI Act. And, we talked about how the 15-year-old Future of Privacy Forum envisions its role in the age of artificial intelligence.

Below is a lightly edited transcript of the discussion.

Jules Polonetsky:

Jules Polonetsky. I'm CEO of the Future of Privacy Forum.

Anne J. Flanagan:

I'm Anne J. Flanagan. I'm vice president for Artificial Intelligence at Future of Privacy Forum.

Justin Hendrix:

I'm excited to have the two of you on the podcast, and we're going to talk about a range of things, including this new Center for AI that you've launched. But I want to start with this recent Supreme Court decision which overturned the Chevron doctrine and how it might impact the ability of federal agencies to regulate technology, particularly when it comes to AI and privacy. How are you all thinking about this new legal landscape we woke up in last week?

Jules Polonetsky:

We have been polling some of the leading legal experts and have invited a number of them to calls and conversations to really try to anticipate what it's likely to mean. I don't know that we know what it really will mean. It certainly means that courts that disagree with agencies using their rulemaking expansively are going to be far quicker to say, no, that's not what Congress meant. And no, we don't need to defer to you, the agency expert, in some very broad way. We get to second-guess and we get to say that this is what Congress meant and you got it wrong. And so to the extent that we've got the FTC in the midst of a very significant privacy rulemaking, and we've got the FCC doing a range of rulemaking that's relevant, challenging the agencies in court, if you were on the other side of one of these rules, was quite tough.

The courts said, it's not our job to second-guess the agencies other than when they really do extreme things that are far beyond their authority; as long as they are doing things that are reasonably within the meaning of the statute, it's their final word. And now it's not. So now, look, if Congress anticipates this, as they should, they will likely be much more restrictive and precise in how they legislate. And that's the challenge for Congress when it comes to privacy, when it comes to AI. Congress, I think, is always trying to strike a balance between giving the FTC lots of very specific direction, because they don't want the agency to go too far, and recognizing that they don't actually have the answers and they want the FTC and the expert staff to really do the details. The European Commission is an interesting example. GDPR is a very detailed document. Even though the courts are spending a lot of time interpreting it today, you had years and years of technical legislative experts writing and crafting it, and recitals that introduce the actual operative sections, because of the culture that the expert gurus who pass these things along put all the detail in.

And I think in the US, in some areas of legislating, we've had that very technical ability, but that's not an ability that Congress has had in recent years. Hopefully we'll get the congressional Office of Technology Assessment back, perhaps. Hopefully Congress will feel the need to have the capacity, so they don't just throw it to the FTC. I mean, it's fair for Congress to do this work; they just haven't had that technical capacity. On the other hand, providing a little bit of rulemaking leeway gives the agency some way to future-proof it, right? We don't know how this will play out, and giving the agency the authority to see and to move on rulemaking matters when legislation is such a clunky, slow, politically fraught process.

Justin Hendrix:

Just with regard to privacy especially, I don't think anybody would sit back and say, well, Congress has been effective and timely with regard to how it's updated privacy legislation. Does this set us back even further?

Jules Polonetsky:

It took seven years to pass the GDPR. Anne was in Europe and closer to that process at the time and may weigh in, but it took seven years. You're regulating the entire US economy. And I appreciate that people want it quick, and they're worried about the various harms, and they're worried about the states, but we're regulating much of the US economy. What I've been worried about is that we haven't had a clear path. We had President Obama's proposed bill, and then Congress was going in another direction, and the House wasn't talking to the Senate, and there was a three corners deal and a four corners deal. You had no clue what it was going to look like. And I think today, even though the APRA is likely to get lots of permutations, the civil rights language is unfortunately out. Hopefully it'll be back. I think we're starting to see the parameters of what this document looks like. So I was worried, and I do worry when I don't see anything happening. I do think that with what we see now, several cycles already with a document that is evolving, we're on our way, and I'm optimistic. The staff have to learn. They have to hear from stakeholders, they have to see what people say is working. They have to get the feedback.

So I've been worried that there wasn't a succession of smart staff and members iterating on a document over years. So I'd like it quickly, but I appreciate that regulating the entire US economy, because data is so critical to so much of the economy, should be done carefully and thoughtfully, with a lot of poking and hammering and banging.

Anne J. Flanagan:

Jules mentioned the GDPR. I think Europe has advantages on the two things the United States has to struggle with, two problems, if you will, in the US: one is preemption and the other one is the private right of action. So in Europe, preemption is solved for, because countries have collectively decided to cede certain competencies to the European Union, and data protection was one of those areas. What was happening around the GDPR, and not a lot of people talk about it, but one of the forcing functions was the fact that countries started to move on their own. So you ended up with this patchwork, this disparate approach, and it really put the European Commission in a position to get consensus around harmonizing that, because it was just becoming too difficult for businesses, particularly when you go back to the sort of 2010 era. The GDPR was agreed in 2015 and came into force in 2018.

So you're going back quite a way at this stage. You really were in a world where the internet economy was taking off at a day-to-day level for folks. So that's really where GDPR was able to step in and solve that gap. And there was a data protection act that predated it as well, so it wasn't starting from zero. On the private right of action piece, that is technically a thing in Europe, but not really. Europeans really look to top-down; they really want the regulatory authorities to take that approach. In the United States, you have this combination approach. So there are some nuances that look a little bit different, and I think it's going to be interesting to see how state approaches have evolved. For example, I think one of the reasons that APRA is on the table is that states like California have moved so aggressively in recent years, and we're starting to see some of that patchwork approach as well. But I think there are some very real differences in why it played out in Europe the way it did versus the US, and how that potentially influences things going forward.

Jules Polonetsky:

Well, it's not technically preemption, but at the end of the day, Europe did decide that a national law does replace the authority of states to take a different path in areas where the GDPR is clear about what the maximum standard should be. And we've seen courts strike down efforts to sometimes be more restrictive or not.

Justin Hendrix:

Just to stay with Chevron for just a moment. And you mentioned the idea that maybe Congress will have to reinvest in its own ability to do technology assessment. Are there other things that you think lawmakers, policymakers will need to do to be able to be effective in creating legislation around technology?

Jules Polonetsky:

I was a congressional staffer, and the committees that were most effective were the ones that had staff with long tenure, with members who had either long tenure or stable turnover, and they learned these issues in a substantive way. They learned who the stakeholders were and where the paths forward were, what the tensions were. They knew who was lobbying them with baloney and who was a credible source if you wanted information. And the challenge Congress has today is that there is a lot of turnover. A huge amount of time every year is spent on social media and fundraising and racing around on non-substantive matters, social battles. And so experts, staff that put their career and expertise towards the tax committees: we can like or not like what they do, but the tax committees can't afford to have people who don't really know what they're doing, or you just can't effectively amend the tax code and achieve your objectives.

The Congressional Budget Office, when Congress really has to have it, pays qualified senior people who need to stick around to actually craft sophisticated technical language. In too many other areas, the turnover, the politics, the drama make it hard for that sort of institutional staff to be there. The European Commission, Singapore: the senior executives in the Singaporean government are paid like law firm partners, right? Singapore has said, we're a small country, we have some unfriendly neighbors, we don't want the best and brightest just being lobbyists or practicing law. Why shouldn't government have the benefit of people who are elite experts? And so as we do more and more global work, it's super impressive to see where and how governments work effectively. And again, we still might not like their politics and the positions they take, but you are dealing with people who are experts in the substance. Even, say, China, where we may really be unhappy with the model of government and so on and so forth: in drafting their data protection law, they looked at the US, they looked at Europe, they looked at jurisdictions around the world, they looked at the critiques and the criticism, and you see the influences of other legislative models. In the US, very often we think we should invent it ourselves because we're always the smartest, and we come up with ideas and we don't learn sometimes from the successes or failures of other jurisdictions.

And so one of the things that has been critical to us as we have globalized: we've got teams on the ground now in Nairobi, in India, in Singapore, in Japan, in Europe, and Spanish-speaking staff covering South America. And we're doing it because, A, there's a lot of data protection activity around the world and we need to keep up, and our stakeholders and members and academic friends and partners need to keep up with what's going on. But we also want the opportunity to sort of leverage: oh, here's how Japan did de-identification in its law, they really thought about it. Or here's how exceptions for research are handled in this country. Let's learn from folks who have some experience. GDPR, what's working in that law now after a number of years? Where are there real tensions where we could do better? Where can we learn? Where do we want to depart?

But we don't see enough of that. And that's a lot of what we try to do. We don't lobby. What we try to do when we engage with policymakers is we say, well, how can we help you? What are you interested in? Because if you're interested in AI regulation, well, here's how Europe did it, and here's what people like about it and here's what others don't. You're interested in this? Well, here's how Korea did something a little differently. Where can we bring in, oh, and here are these academic articles that are really relevant that you may not be aware of; they might be behind a paywall. Or we do an annual event that we call Privacy Papers for Policymakers, where we have a competition for the academic papers that policymakers should be aware of. But Hill staff don't have time to go read all the privacy literature. And so we pick six, seven papers with a jury that goes through them and says, these are things that really mattered. And then we have the authors come and present them. So that cross-pollination is something we're super proud of, and it's very much how we spend a lot of our time.

Justin Hendrix:

I assume that is part of the DNA for what you have to do with the Center for AI, that you'll be looking at artificial intelligence with that same mindset.

Anne J. Flanagan:

I think that's exactly right. I love the way you said part of the DNA of the center. I think the center is a really interesting endeavor for FPF. As we were scoping out its remit, we really wanted to ensure that it was fully part of the blood of FPF, the DNA of FPF, like you say; that we really are taking an approach that is congruent with how we have been approaching privacy and data practices for the past 15 years. We've been working on AI issues, even though the center is new, for the past seven or eight years, and I've had some predecessors that have led smaller teams. But at this moment in time, as we start to see every aspect of data touch against AI, and we do have all of that multi-stakeholder collaboration, truly multi-stakeholder and global across FPF, it's really time to, I guess, collect that together, make it more seamless for our stakeholders, and our stakeholders include government, private sector, academia, civil society and other experts, and really help folks to navigate the information that we have.

Just building on what Jules was saying there about how we help folks find information, I really like to describe it as pulling back the curtain to let folks understand what is happening around AI policy development and thinking right across the world. With our expert hats on, us and our community of experts, what do we think are some of the most salient and relevant things that are out there? And then folks can make up their own mind as to whether that is something that they agree with or not. But we're very, very research-focused, very evidence-based in everything that we do. So when we constructed the center, it really is around, I think in the first instance, collecting all of that bread and butter of FPF's work around AI. We are able to put more resources behind it and expand it. And we are shortly launching a very large-scale initiative around helping companies to conduct impact assessments around AI.

It is something that everybody is struggling with. Some larger companies are doing a terrific job; they're way out in front, they have their own best practices that are out there. Certainly smaller companies, or companies that are maybe not AI-first, if you will, that are more AI-curious, may struggle with what benchmarking in this industry looks like, what benchmarking across every sector looks like around AI. And then policymakers ask us what's happening out there; academics inform us around here's how things could be better. So really, when we look at the center, it's around grounding it in that multi-stakeholder, global, evidence-based approach to really showcasing the best of thought leadership out there around AI.

Jules Polonetsky:

Right now, we have a project around schools. How does a teacher or school principal or the IT person do a complicated AI assessment? But all of the tools that are being pitched to schools today for personalized learning, for grading, for language discovery are labeled as AI-powered. And so we've got to think about, yes, big organizations and the LMS and that sort of large-scale thing, but this stuff is in every piece of software. We fired up Zoom; oh, there's AI stuff there. Well, somebody at every organization using AI needs to be asking, should we have turned that on? Or do we want that turned on? And how is it using the data of the kids, the teens, the students, the employees? And so one of our goals is to try to put out materials for those who are not going to have a giant law firm building an assessment for them.

We just did a project for the school system on how to do FERPA, COPPA, the basic things that every school administrator needs to do when they decide which apps, which technology, which services can be used in schools. They've got to do a COPPA assessment, they've got to do a FERPA assessment, just walk through it: hey, some of this is not new, this is just a cloud-based service. You already know that your teachers, your students can't simply use free commercial services, because they would be violating your obligations under COPPA, under FERPA to not allow data to be used by those parties for advertising or for other commercial purposes beyond supporting the students, and so on and so forth. Some of this isn't new. One of the big challenges with AI is that a lot of the issues are not new. I was the chief privacy officer decades ago at DoubleClick, and there were ads, and the ads kind of optimized themselves based on who clicked on them.

And we didn't call it AI; we called it, I dunno, we called it big data. And many of the issues that are taking up a big part of the attention, discrimination, scraping data, guess what, they didn't start happening yesterday; they have been part of data protection law for a very long time. So the risk is lots of people coming in new with some view of regulating AI without recognizing that data protection already regulates, even without a national privacy law in the US, certainly with privacy laws around the world and with the dozen-plus states that have privacy laws now, and states like Colorado and others that actually have AI laws. We've actually got a lot of AI regulation, we've got laws about discrimination, and you've got to start with that. And then we can say, well, where does that fall short? Are there things about it that we don't like?

Maybe we were okay with scraping because we saw it as, hey, this is for searching and this is for interoperability, and now we're worried about power imbalances and who's going to make the money and whether the people creating the content are going to be left out. So maybe we want to rethink it, but there are decades of law in place already, and what are the new issues that go any further? Where are we seeing challenging issues? Right? Extinction was not on the radar screen when we talked about big data or automated decisions years ago, and now we're talking about extinction. Okay, so maybe. But I think the challenge is to try to build from current law as opposed to reinventing things anew that don't match well with current law. I think that's the problem that Europe is going to have with the AI regulation. Before we started, I was talking briefly about being just back from a visit to the Bordeaux region, and I was talking about the complexity and the sometimes logical and sometimes not logical conflicts between the different layers of wine and vineyard regulation.

And the AI Act was not built on, and didn't amend, the GDPR, right? GDPR already has sections on automated decisioning. Gabriela Zanfir-Fortuna, our global lead, and her team did a study and looked at all the court cases and all the DPA enforcements around automated decisioning, which is kind of AI. And so my hope was that where personal information was involved, one might've amended or built on top of GDPR: well, here, this doesn't go far enough, we need more detail, and so on. Instead, what was done in large part is a kind of parallel regulation. Now, it says over and over, oh, but this should operate with respect to the GDPR. But guess what? The data protection regulator doesn't actually understand where and how their role fits in. They see their role, rightly, as privileged under the charter. They're independent regulators who are looking out for human rights and the appropriate balance.

And so is the AI regulator, who in some cases will be the data protection regulator. So hopefully they'll talk it out internally, but in other cases it's not the data protection regulator. And a lot of the regulators are seeing this as a threat, as a challenge, as an area of confusion, as an area that's sort of undercutting data protection. And I think that swirl, that lack of clarity, is really going to be a challenge. Our annual Brussels Privacy Symposium this year is focused on that intersection. It will be a tragedy if it takes years and years of court decisions before we actually understand the balance of these laws. So I hope that as we do AI regulation in the US, we're able to do it by building on top of privacy regulation. And so, although I appreciate the need for data protection regulation to take its time, what I am worried about is that I don't see how you do AI regulation without having data protection regulation. If we're debating scraping, well, tell me whether it's okay to collect that data without consent, or what basis I need, and then we can talk about what it means for AI. But if I haven't resolved yet what we think about the laws and rules around collecting personal information, how do I do it only in the context of AI? I need the data protection regulation first, and then logically I want to build on it. I fear we'll do what Europe did, at which point we get this sort of confusing model of conflicting laws.

Anne J. Flanagan:

Yeah, I mean, Europe gets accused all the time of being a complicated bureaucracy, and it's highly complex, highly, highly complex. And they do do their best in terms of filling gaps, as Jules was saying, because the AI Act is really based on product liability. That's the key that it sits on. It's not sitting on the right to data protection or the right to privacy; it's sort of taking those for granted. I think the challenge is going to be, as Jules was mentioning with the court cases, there's a premise to that, which is that it assumes a high degree of cooperation between different regulatory authorities behind the scenes. There's good precedent for that in areas like telecoms, but telecoms regulation is decades old. You're bringing in AI on top of data protection, and data protection is almost like a sacred space in some ways in Europe.

So there will be some administrative challenges behind the scenes. I'm not sure that that model would work in the United States; the picture looks quite different. But I think the point is well made that when we look at AI, we're definitely not starting from scratch. And if you think about AI from the perspective of the person, data protection largely, maybe not completely, but largely has that space covered. And we look at it from the perspective of privacy as well. And then I think it really comes down to what is the risk of the technology? We're at a pivotal moment right now in many respects, because we've had this surge of ChatGPT, which is over a year ago at this point, nearly two years, if we can believe that. And we've heard experts in the community talk for decades about how data protection is really important because the scale of data is increasing.

We're into the age of big data, and the speed of data processing is increasing. Well, guess what? We've now reached the point where that is a reality for absolutely everybody. The conversation is no longer just about harms around social media, for example. And AI will touch every aspect of every technology in our lives in some way. So I think the challenge then becomes putting the parameters around what is dangerous and what is not. What does harm look like in different contexts, and do we need to revisit certain norms that we already have established? Data minimization is one that comes up again and again: the idea that you only process the minimum amount of data that you need for the technology. There may be circumstances where it's appropriate or important to process a little bit more if it means keeping kids safe online, for example, by identifying the fact that you have minors using the technology. So I think there are some nuances that are introduced into the conversation when we bring AI in. But if you look at it from a legal perspective, a lot of it is already there somewhere in law in any jurisdiction in the world.

Jules Polonetsky:

At our AI center launch, we hosted a debate on whether data minimization is compatible with AI. We had Omer Tene, a leading privacy lawyer, argue that it was not, and Samir Jain of CDT, who has been advancing its importance, argue that it is compatible. And although the majority vote was in favor of Omer's argument that it was incompatible, people moved from the beginning of the debate towards Samir's direction, convinced that there was more to the argument than they thought in advance. And that is what's interesting about this area: there are legitimate conflicting values, more data, more accuracy, more risk, and there's nothing better than a good, healthy, honest debate. When people start yelling, well, you're only taking this position because big tech, small tech, competition and this and that. I mean, yes, those things are all there. We should be open-eyed to the politics behind it.

But there are some hard, important issues, and we have to figure out models. Let's take location data, right? We are all worried about location data. Will it be used to identify that you're gay? Will it be used to identify that you're seeking reproductive services? Is it going to be used to target you with ads? But then we also have the CDC and health equity experts using location data to know how disease spreads, to know which hospitals are successfully serving people from what areas and what ethnic backgrounds. So there are these tensions, and there are conflicting values. Yes, I might want data to better serve a minority population, a poor population, but if that population says, I'm actually feeling at risk because the government's going to get it or it'll be used against me, you can't pooh-pooh that and say, trust me, because I'm the company, I'm the government, I'm the big player, I'm the foundation, I'm the school. We need to figure out how to shape these norms in a nuanced way so that society gains in a way that we think is the most fair.

Justin Hendrix:

I assume that that effort of trying to mediate between all of those various interests, and the different types of conflicts that could occur between industry and, as you say, foundations or governments, et cetera, is kind of what you're doing at the forum when the lights are not on at the event and you're having an open debate. How do you manage the different stakeholders? You're significantly funded by industry, of course, and you've got other funders as well, including foundations. How do you manage that tension between those corporate interests and the other interests that you represent?

Jules Polonetsky:

Yeah, so a couple of key thoughts. When I founded FPF 15 years ago, my goal was to find the place that supported being able to act and speak independently. And so our day-to-day bread and butter is not the heavy-duty policy work. Our day-to-day bread and butter is helping the senior people, whether it's chief privacy officers or people at schools and universities or data protection authorities. It's helping the senior people in privacy understand what in the world is actually going on. So simply tracking legislation, giving them honest analysis, here's how the GDPR handles this and this and that. Helping those communities deal with the flood of what's coming at them in just an accurate, fair, honest way. We have 20, 25 law firm members who are all there; they don't have individual views of their own, they're all making money representing clients, but we support their need to know what's going on.

There's a new bill: what does it really do, what does it mean, so on and so forth. That pays the bills. We have a significant amount of foundation support for some of these bigger-picture projects, like supporting the school system. And we've set up our board, our actual board, my actual bosses, as diverse people who do not agree with each other: Olivier Sylvain, formerly of the FTC; the academic Anita Allen; but then Alan Raul, who represents companies. The actual bosses do not include any donors. We don't have any corporations on our board. The people who actually hire me and can fire me are stakeholders who represent some academic views, some civil society views, and some former chief privacy officers who understand what that community needs and wants but are not employed by anybody. So we really worked hard to set up a structure. Our advisory board includes academics and civil society and data protection authorities, and we don't want anybody to leave in a huff.

So we tell them, we are not your trade group. You've got trade groups; go do your business with them. And if you're an activist, there are people out there to go fight and argue. We're actually, honestly, interested in being the very boring geeks who actually want to understand this incredibly complicated area. And obviously I don't think it's boring; I'm very excited by it. But there's a space, I think, to bring together people with different views. So I can't speak for big companies. I've got little companies, I've got banks, I've got schools, I've got academics. We don't all agree on this stuff, and that's fine as well. We're not an advocacy group, maybe like a CDT or an EPIC, where there's a point of view. Stacey Gray, who's on Anne's team, was just here and we were debating, and she and I disagree. You know what?

And that's fine. So we're a little more of a think tank in an area that I think has complexity now. Sometimes we'll negotiate a code of conduct or best practice and people will sign on to stuff, and then obviously we've got to negotiate and represent the language. The Student Privacy Pledge, which almost 500 companies have signed on to, where they represent how they'll handle student data. We just negotiated with LinkedIn, ADP, Indeed and Workday a set of rules for how they and their tools, which are used by thousands of companies, can be used for AI in hiring. And so there are times where we will go and work to negotiate with a group of stakeholders and get input from academia and civil society so that it's well baked and well received. But most of the time we're out there. We will testify, and so we'll have an occasional point of view, but usually we're there to help educate, help explain, help point out that there is nuance. So we're not as likely to be yelling, this is great and this is bad, because there's a lot of gray here. That's what we do. It seems to be working.

Justin Hendrix:

And let me ask you, and maybe we'll get close to wrapping up on this, because I want to look forward a little bit with regard to the Center for AI. You've mentioned a couple of priorities you've got in the near term, around assessments, for instance. I know you intend to do lots of other research projects. What else can we expect from the center in the near term? What's on the list that all of those stakeholders have more or less agreed that you should be working on?

Anne J. Flanagan:

Well, I love Jules' analogy that we're, I guess, kind of a research service to folks interested in this policy area. From that perspective, think of us as being present at every stage of that policy development and deployment life cycle. If you can think of policy almost as a product, there are the conversations that happen around what should happen around policy. So we'll get asked our opinion by legislators and lawmakers and various different experts. And as Jules has articulated, we will offer evidence. We won't really necessarily share an opinion. We'll say, well, if this happens, this could happen; or if this happens, this could happen. We're not a trade association that's going to lobby for a position, but we are present early in those conversations and, very thankfully, trusted in those conversations. As things go on and as the rubber starts to hit the road, a lot of chief privacy officers will use us as a forum.

The name forum is not accidental. They will convene and talk to each other and say, how are you approaching this new law? What are you guys doing? How are you actually implementing that on the ground? And they'll turn to FPF and they'll say, can you maybe set up a meeting and let us all talk to each other, and we'll figure out whatever our next step might be? We'll happily, happily facilitate that, and we just step back and let them do their thing. So we really are around from sort of the beginning to the end. When you look at academics and civil society actors, their voices are incredibly important in all of these conversations. One thing that I forgot to mention was that the center is backed by its own leadership council. Jules mentioned our board of directors and our advisory board, but we also have a leadership council, which is truly global in nature and is composed of businesses, academia, civil society, governments and data protection authorities.

We have 27 members, some very big names, and we're very, very grateful for their advice. They will really steer the ship around the substantive areas of work that the center will be addressing. So impact assessments are really, really top of mind, both for practitioners and for policymakers. From a practitioner's perspective, if you are already involved in AI, whether there's a law that says you have to conduct an impact assessment or not, it's in your best interest to do so. It's important for protecting people and it's important for the sustainability of the business. So folks are very, very interested in that space. We're also starting to see a lot of companies that maybe were not traditionally the names that you think of when you think about AI, but they are now introducing AI into the company somewhere. And they're really looking for guidance, not necessarily from FPF, but maybe via FPF, through to academics, other experts and other companies, and to governments and regulators.

What's happening, what are the trends here? That's something we do very well, curating those trends for folks. And then when you think about how policymakers and legislators in different jurisdictions are looking at this issue, is this something they should be legislating if it's happening anyway? Maybe they're really interested in understanding what's happening already, what's out there. So we're really able to help folks find each other, to convene and bridge that gap in that forum manner. That's really, really a big one on the table right now, and it's something we're getting a very, very strong signal to look at. One of the things that I think we're also particularly well positioned to do right now is to look at how AI is evolving across different sectors. We have a number of different teams right across FPF, from automotive to healthcare to education and youth.

How is AI affecting different sectors? What can we learn from the work of colleagues in FPF already and the different stakeholders that they're working with? So really, really understanding with that level of nuance. And then there are a few other things on the table right now that we will be discussing with that leadership council as well, and we'll see how the rest of the year evolves. But suffice to say, with more resourcing and a more centralized approach, we will be taking on what I think we're terming AI-specific projects. Tools and impact assessments are really the first piece of that, and that's a very, very big lift.

Jules Polonetsky:

On July 9th, the White House will be kicking off an FPF effort around privacy-enhancing technologies in support of the Biden executive order on AI. One of the important things that the executive order on AI recognized, as it seeks to support government agencies' uses of AI but set parameters around it, is the role of privacy-enhancing technologies. And so we've received some funding from the National Science Foundation to help advance legal certainty for PETs, privacy-enhancing technologies that are used to manage AI data, but also the ethics of the uses of those technologies, right? I mean, if PETs are just going to help big companies have an advantage, or if PETs minimize or maybe even eliminate a privacy problem but I still don't like what you're doing, the ethics at the end of the day, where does that fit in? Right? There are technologists or even companies who think, hey, I anonymized it.

I've got no problems, so why are you bothering me? But no, if you're using technology in a way that feels discriminatory at the end of the day, or feels like big players are getting an advantage, or you learn things about populations, even if it doesn't invade my individual privacy, but I just learned something about a population that's going to be used in a negative way, where do you capture that so that we can ensure that PETs aren't an enabler of societally negative practices? So we're super excited that the White House is hosting an event to kick that off. And one area I'm particularly excited about, but also worried about: you can perhaps hear from our conversation that we certainly are people who are optimistic about tech and data. I mean, I think that's frankly why a lot of people who may not agree with everything we say and do support us or want to engage.

We do think that at the end of the day, ethical uses of tech and data advance society, help solve health problems, help address climate change, help improve things, but they will cause all the problems that critics are worried about if we don't have clear laws, strong rules, good technology and good practices around them. We are an aging population in the US, as are Japan and Europe, and technology is expected to play some bigger role in helping monitor people with Alzheimer's, helping predict who's going to fall, helping solve health-related problems, providing care. We don't have enough healthcare workers. What role will robots play in helping me get in the bathtub, in providing therapeutic pets, or all the age tech uses? There are thousands and thousands of age tech companies, many using AI, but you can already see the issues of surveillance and control and monitoring.

And if I want to predict that you're going to fall when you're in your eighties, and the devastating effect that can have, that means I need data about where you are in your house and all kinds of other factors about you. So we're today talking to some foundations about mapping how companies are planning and starting to use that kind of data. Is the data of seniors even in the AI? Are seniors the early adopters here, so that the AI is even being trained to serve those populations? You know that funny commercial about seniors talking to Alexa and yelling at it and it doesn't understand them? We can't have that when it comes to critical home care services. So we're working on some proposals to help map the way age tech and AI are intended to be used for healthcare and for seniors, and what we can learn from other countries and what policies we need to have in place so that we don't go down a winding path here.

Justin Hendrix:

Well, I should hope that we can have a conversation again like this before another anniversary of the Future of Privacy Forum, 15 years in. I would also just like to thank you for your analysts who've shared their thoughts with Tech Policy Press readers, both writing pieces and also occasionally responding to our requests when we need your expert analysis. So I thank you for that. And Jules and Anne, thank you for talking to me today.

Jules Polonetsky:

Thanks to you and the team. You guys have provided a really great forum for real debate and discussion around some of the hardest technology policy issues of the day. It's a real contribution to the conversation, so we always look forward to reading the latest posts and articles.

Anne J. Flanagan:

Yeah, thank you so much. We are big fans of Tech Policy Press, and thank you for this conversation today.

