
Charting the Future of Tech Accountability

Justin Hendrix / May 25, 2022


For the past six years, an independent research program at New America called Ranking Digital Rights has evaluated the policies and practices of some of the world’s largest technology and telecom firms, producing a dataset that reveals their shortcomings with respect to their human rights obligations. Ranking Digital Rights evaluates more than 300 aspects of each company it ranks that fall broadly into three categories: governance, freedom of expression, and privacy.

Following the release of this year’s report, which we covered at Tech Policy Press, Ranking Digital Rights hosted a session on Charting the Future of Big Tech Accountability. Nathalie Maréchal, Policy Director at Ranking Digital Rights and a past guest on this podcast, moderated the panel, which included:

  • Sarah Couturier-Tanoh, Shareholder Association for Research and Education (SHARE)
  • Jesse Lehrich, Co-Founder, Accountable Tech
  • Chris Lewis, President and CEO, Public Knowledge
  • Katarzyna Szymielewicz, President, Panoptykon Foundation
  • Sophie Zhang, activist and Facebook whistleblower

Below is a lightly edited transcript of the discussion:

Nathalie Maréchal:

Let's jump right into it. Sarah Couturier-Tanoh is an expert in corporate research and shareholder engagement. She leads dialogues with Canadian and international companies to advance ESG issues, including human rights, decent work, and corporate lobbying. She has published several issue briefs on current shareholder and policy topics, drawing on her background in non-financial auditing.

Jesse Lehrich is a co-founder of Accountable Tech. He has a decade of experience in political communications and issue advocacy, including serving as the foreign policy spokesman for the Clinton 2016 presidential campaign, where he was part of the team managing the response to Russia's information warfare operation.

Chris Lewis is President and CEO of Public Knowledge. Before becoming president and CEO, Chris was the vice president of PK from 2012 to 2019, leading the organization's day-to-day advocacy and political strategy on Capitol Hill and with the administration.

Katarzyna Szymielewicz is an expert in human rights and technology, a lawyer, and an activist. She's a co-founder and the President of the Panoptykon Foundation, a Polish NGO defending human rights in the surveillance society, and one of the leaders of the corporate accountability movement in the EU.

Last, but certainly not least, Sophie Zhang became a whistleblower after spending two years and eight months at Facebook. During that time, she tried, but was not successful in, efforts to fix the company from within. She personally caught two national governments using Facebook to manipulate their own citizenry, while also revealing concerning decisions made by Facebook regarding inauthenticity in Indian and U.S. politics.

So as you can see, we have a really illustrious panel here of people who have been deep in the trenches of corporate accountability from a variety of angles, and I'm really excited to chart the future of our movement together with you all. Jesse, let's start with you. You're the co-founder of Accountable Tech, a campaigning organization working to bring about long-term structural reform to tackle the existential threat that social media companies pose to our information ecosystem and democracy. Tell us about what led you and your co-founder, Nicole Gill, to focus on this issue and what you think this movement has accomplished so far.

Jesse Lehrich:

Thanks so much for having me today. And I think you hit the nail on the head even in your question, when you say an existential threat. That's really how I've come to view disinformation and the current information ecosystem that we live in, in which there is no shared consensus reality, no shared baseline of facts. Social media platforms certainly aren't the sole cause; they didn't invent disinformation or polarization or racism or extremism or echo chambers, but they serve as a unique accelerant on each of those fronts. And as the fabric continues to fray and we lose the ability to have cool-headed conversations, to have policy-focused conversations, to have fact-based conversations, I think democracy is, day by day, at more and more risk. And so we felt that this was an issue area where people are starting to recognize that on every issue where they want to see progress, disinformation and the information ecosystem serve to thwart that, because it is this intersectional issue.

It's very hard to win arguments or have a functional democracy or have productive conversations if you cannot even communicate facts, if you can't reach people, if everything is being filtered and warped through the lens of a few dominant platforms, which are built to optimize engagement, which often means amplifying the most toxic things on the platform, and doing it in a way that's micro-targeted to each person to play on their personal biases. So you have this dynamic where it's simultaneously global and ubiquitous, but also unprecedented in how precise and personalized everything is. And so I think we have done everything in our power since we thought about this and stood this organization up to try to fight on all fronts, because there is no silver bullet to this myriad of interrelated problems. But I do think we've pushed for direct corporate accountability, trying to call out and educate the broader public on some of the fundamental flaws that we're worried about with the dominant social media platforms.

We've pushed for legislation and education in the U.S. We've worked with our friends in Europe, and I'm sure Kasia will get more in depth on the DSA and DMA that are making their way through Brussels. But it's really exciting to see how comprehensive those proposals are in tackling some of the fundamental harms here. And we're seeing progress at the state level as well; just yesterday the Age-Appropriate Design Code in California advanced. Even in the fluency with which members of Congress are talking about these issues compared to where they were a few years ago, the progress has been really significant. There's certainly an enormous amount of work to do, but I do think we're making progress, and I'm very grateful for everyone on this panel, yourself included, for helping to drive that.

Nathalie Maréchal:

Thank you, Jesse. So Kasia, before we get into the details of the DMA and DSA, how does what Jesse said compare with your experience in Poland and Europe more broadly? Can you reflect a little bit on where our movement has taken us so far?

Katarzyna Szymielewicz:

Yes, I will do my best in the short time we have. Truly, I feel we live in interesting times for regulating Big Tech. Over the last five years in Europe, we have witnessed increasing political support for deep reform. If you go back to what we heard from European Commission leaders like Thierry Breton and Ursula von der Leyen at the beginning of their term, they clearly attacked the very business model behind Big Tech. The engagement-based business model, advertising technology, all of that was clearly set as a target for regulation. Big Tech itself has been seen as a risk and something that the EU should not only react to with a number of pro-competition, pro-consumer cases, but even preempt with proactive regulation. So that movement was supported by whistleblowing, supported by cases like Cambridge Analytica– we still remember that, right? We have had new cases since then, but that affair with Cambridge Analytica has been, I think, pretty influential here in Europe in informing the political agenda.

So on one hand, we have seen incredible movement of policymakers toward a critical agenda. On the other hand, if you look at the goals set for the reform we are witnessing today, the DMA and DSA together as a package, there are obviously two legs. One is people's empowerment through new tools and safeguards, and I will come back to how much that is worth. On the other hand, there is always the economic, liberal narrative present, and no surprise: the deeper we go into the reform, and the longer it takes (it took two years to work out the details), the bigger the impact of the market logic. So it's sad, but it's also realistic to say that after two years in the making, the regulation has been, to a great extent, influenced by Big Tech's lobbying. And the most revolutionary aspects of it, the biggest promises, have not been implemented in the end.

Nathalie Maréchal:

So the EU is obviously much further ahead than the U.S. in regulating Big Tech, having recently finalized both the Digital Markets Act and the Digital Services Act, though we won't see the final text of the DSA for a bit longer, and maybe you can help us understand why that is. Because I know that for a lot of our audience, and for me, policymaking in Europe is a bit of a mystery. So maybe you can help us unpack that a little bit. Can you tell us about these two pieces of legislation and how they change the fight for Big Tech accountability? And maybe give us a short preview of what's coming next in Brussels?

Katarzyna Szymielewicz:

Yeah, truly it is complicated. Honestly, even we as civil society lobbyists only understand what's really going on there after being inside the process. Very long story short, there are three key bodies involved: the European Commission, responsible for proposing the reform; the European Parliament, which is usually seen as the most progressive body, at least for the sake of broad representation of various societal concepts of how to regulate, so we always have the left and the middle and the right, with the middle being the strongest voice and the Christian Democrats still dictating more or less the mainstream; and the Council, which is the representation of governments. And again here, there is a whole variety of opinions and positions, with current politics being incredibly important. Needless to say, the conflict in Ukraine has opened certain gateways that seemed closed, and closed off others that were important a year ago.

So this is all pretty dynamic. We have to observe that, and we know... well, not always can we, because part of this process is extremely non-transparent. That goes for everything that happens in the Council and in the trilogue, the trilogue being the moment where the three come together to negotiate the final shape of the legislation. These meetings are mainly technical meetings where only experts sit, and they are not expected to leak information, which usually happens anyway. So we can predict what will be in the final legislation, but officially we have to wait a month longer, maybe three weeks longer, after the end of official negotiations to see the final text, after the technical people sit down and basically type up, put on paper, what has been discussed behind closed doors.

So whatever we say these days is based on leaks and on assurances that we received from various stakeholders who were more or less public about the process. Ironically, the meeting where they negotiated lasted until 2:00 AM, well past midnight. But around midnight, Commissioner Breton had already published a whole thread on Twitter explaining what had been won. So the lack of transparency does not prevent PR from happening as usual. This is how it goes: people comment on this reform without really having seen the text yet.

Nathalie Maréchal:

Okay. So maybe it's best to hold off on a deep analysis until we actually see the text then. So what about the U.S.? Chris, what's the state of play here and what can civil society do to pressure policymakers on this front? Might we actually achieve some degree of tech accountability through legislation or regulation in the U.S. this year? What do you think?

Chris Lewis:

I think it's challenging. And congratulations, Nathalie and Jessica, on the latest report; it's fantastic work. I'm optimistic in the long run and pessimistic in the short run. And as you noted, we're further behind Europe in really understanding where we want to go with accountability regulation in the U.S., so I think we need to pick up the pace. Unfortunately, what we've seen in the United States and in Washington so far is a lot of focus on small, one-off fixes to specific things that legislators have seen in the news. Some of these we support at Public Knowledge, and I know others here also support them. But what we aren't seeing is a real framework approach like we're seeing in Europe. And in the long run, that's where I think we need to be.

And so hopefully what we'll get out of some of the one-off proposals we're seeing around privacy, competition policy and antitrust, and algorithmic oversight will form the basis of how people understand what accountability should look like, and we can move toward more of the framework approach that we're seeing in Europe; that is my hope. There are some real challenges that we face. Unfortunately, I think some of the biggest challenges we face in the United States are really political and ideological. Given the atmosphere in Washington these days, the ideological divide means that it's very difficult for folks to agree on things, and that's seeping into a lot of the debate. So, for example, in the United States, we know that we will face, in tech accountability, the challenge of making accountability work with the First Amendment and First Amendment protections.

But unfortunately, over the last few years, we've also seen a real breakdown of the consensus in the United States about what the First Amendment means and what free speech and free expression protections are and should be. A lot of that comes out of broader political fights that are really not related to tech policy per se, but unfortunately it's affecting how people view harms online and what solutions they'd like to see.

We also run into the challenge that so many of the companies we want accountability around are based in the United States. There's national pride involved. And so often when proposals are put forward, you'll hear folks say, "Oh, we can't do that because it will hurt the U.S., or it will hurt U.S. companies and our competitiveness broadly."

I think that's very shortsighted, and hopefully we can, as civil society, rebuild our consensus on what the First Amendment and free expression are and should be, and also build consensus on what a basic understanding of accountability should look like. That's difficult, but it's really the challenge in front of us: to bridge some of the ideological divides that we're seeing in our country right now and build a conventional wisdom around some of those ideas.

If we can do that, then I think we can get to more of a framework approach. And in the meantime, I think we're going to see a few smaller bills go forward: things that promote competition, bills around self-preferencing and non-discrimination, hopefully around platforms like the App Store, or broader interoperability. There's probably going to be a push this summer around privacy, but whether the parties can come together and agree on what those enforcement structures look like is unclear yet. So we have a lot of work to do. And as civil society, we have a lot of work to help folks who may differ ideologically on various issues realize that they have the same thing at stake here: the right to express themselves differently and to have safe communications online, even when they disagree with each other. So that's the real challenge in front of us as civil society.

Nathalie Maréchal:

Thanks, Chris. And I'm glad to hear you say that you're optimistic, at least in the long run. I have a lot of conversations with people who are very pessimistic on every time horizon. And for me, I personally think it's important as activists to choose to be optimistic, because if you don't have that optimism that you can win, you lose the will to fight, right? And I think that's the most dangerous thing for any social movement: to give up the idea that you can win before you've even tried.

Chris Lewis:

Well said.

Nathalie Maréchal:

Thank you. Shifting gears a little bit away from civil society advocates to different kinds of changemakers. Sophie, I'm particularly glad that you could join us today for this conversation because you're one of the very few people out there who has worked inside a Big Tech company, has left, and can speak openly, because you're not bound by a non-disparagement agreement since you turned down a pretty hefty severance package from Facebook. Tell us, why did you decide to speak out about your former employer?

Sophie Zhang:

Thank you, Nathalie. So just to be clear, I am bound by one non-disparagement agreement that I signed when I joined Facebook. I refused one when I left, so I would be breaking one rather than two. Ultimately, Facebook hasn't sued me because it would look terrible for them, and it would also be an admission that everything I'm saying is true. I'm protected by that, rather than by my turning down the money.

So anyways, I worked at Facebook for two and a half years. In my time there, I caught two national governments red-handed breaking Facebook's policies on a vast scale, setting up fake personas purporting to be their own citizens in order to mistreat, harass, and otherwise repress their fellow citizens. These were very clear-cut cases in which there was absolutely no moral nuance. No one was defending these cases on their merits.

In other cases you can say there are real questions at stake. What is the right decision here? Do we know for certain? But none of that was the case here. And Facebook still took almost a year to act in the case of Honduras, and more than a year to act in the case of Azerbaijan. And ultimately I also knew that I was doing this only in my spare time; this wasn't my actual job. In fact, it was no one's actual job. I had no special training in this area. I'm certainly not a super genius. And the reason that I, some random person out of grad school at her second job, was able to catch two national governments red-handed, with no training and no expertise and without being a genius, is simply that they were low-hanging fruit. No one had bothered to look at them before, so they could be lazy.

Ultimately, you can't fix a problem until you know it exists in the first place. And right now, on many issues, only Facebook knows precisely what is going on within Facebook the platform. I don't think it will be a surprise to anyone to say that Facebook is a company, and its goal is to make money. And at the end of the day, we don't expect Philip Morris to have a division that tries to make cigarettes less addictive, or Philip Morris to have a division that reimburses Medicare every time someone gets lung cancer. The very idea is a bit ludicrous. But imagine a world in which Philip Morris knows that cigarettes give people cancer, but Philip Morris is the only one who knows, and Philip Morris is the only group that has any chance of finding that out. In that situation, I think it would be very important for someone from within the company to come forward. And so that's precisely what I did, and what I'm still doing today.

Nathalie Maréchal:

Well, thank you for your whistleblowing and for your activism, Sophie. One thing you said to me when we talked last month that I thought was really interesting, and that I'd love to hear you talk about some more today, is that when you brought these concerns to your managers, they used the rhetoric of users' rights to resist taking action against these people, including government officials, who were using the platform to hurt other people in various ways. And it's true that in the early years of the digital rights movement, we were really focused on protecting free expression and privacy for platform users and perhaps not thinking enough beyond that, though I think by this point the conversation has caught up. What kinds of messages do you think it's important for civil society groups to be sending to companies and to policymakers? What should we be asking for?

Sophie Zhang:

Absolutely. So just to first provide context: when I brought these cases up to leadership at Facebook, often there were concerns about taking precipitous action without warning people first. Fundamentally, users' rights are about protecting users from the platform. But that can become a problem when users themselves are the problem. I mean, both are valuable initiatives, in the same way that, for instance, policing and police reform advocacy are both valuable efforts that are naturally at odds. Giving suspects more warning, such as the Miranda rights, has reduced false confessions and helped protect people from the police, but it has also made it harder for the police to catch people.

And that's the analogy that I'm going to use very broadly here. An additional facet is that at Facebook, the people who judge cases, the policy staffers, are the same people who are charged with lobbying governments and political officials and essentially getting on their good side, which is a very different paradigm from that in law enforcement, et cetera. In the United States, if a judge were assigned a trial and it turned out that they went for weekly lunches with the defendant, they would be required to recuse themselves, I hope. At Facebook, it would only be a problem if they didn't know the defendant. I'm being a bit flippant, but I think that gets my point across. And so Facebook had incentives to protect the important and influential from its own systems, and to use the rhetoric of not taking precipitous action, giving people fair warning, et cetera.

And like you said, that goes back to the initial view of accountability for tech platforms: that it was about accountability for the platform and protecting users from it. I've read the criteria that RDR uses, and my understanding is that most of it is focused on the platforms' own transparency about what measures they take against users, what protections users have in terms of privacy, in terms of enforcement, et cetera. So right now it doesn't cover much of the other facet, which is protecting users from other users, protecting users from violations of platform policy that are not being enforced, which I believe is equally important, because right now there is not much transparency or visibility into this.

So frankly, something that I believe would be a good idea for RDR and other similar transparency groups would be to do, essentially, Red Team-style penetration tests. These would have to be done carefully, because if you go at it alone, I'm sure Facebook will find an excuse to ban you, et cetera. But in principle, accepting those sorts of issues: if you want to know, for instance, how good each company is at taking down fake accounts, the best way to do it is, under controlled test circumstances, to set up your own fake accounts and see how many of them are actually taken down by each company. And then you could report afterwards: we set up a hundred networks of fake accounts on Facebook, Twitter, Reddit, TikTok, et cetera.

Facebook took down 10 out of 100. TikTok took down one out of 100. Twitter took down two out of 100. They're all terrible, but Facebook is the least terrible. I'm making up these numbers, obviously. The same approach could be used, for instance, if you're concerned about hate speech: under controlled circumstances, post hate speech and see what percentage is taken down. You could also test responses to user reports: create violating posts, have people report them, and see how many of them are taken down.

Other people are concerned about social media overreach and the takedown of posts that aren't violating. You could use the exact same approach: make posts that aren't violating, maybe a bit borderline and unclear, and report them. Perhaps report similar posts on different sides of the political spectrum, if you're worried about political bias, and see how many of them are taken down incorrectly. People have done experiments, and there's anecdotal discussion of these sorts of issues, but I don't think there has been any systematic approach to it. And I think that would be extraordinarily valuable, because right now a lot of people are talking past each other based on anecdotal evidence. And when you have two billion users on a platform, there will be anecdotal evidence for anything.

Nathalie Maréchal:

Those are some really great suggestions for going beyond RDR's current research methods and approach. Obviously the kind of indicator-based research on publicly available documents is far from the only research method out there. And our team is very much thinking about how we can expand our current arsenal of research tools. And I hope we can continue talking about this in the weeks and months to come, Sophie.

Now Sarah, you come from a different perspective from Sophie's, working as an investor and an advocate. And of course, one of the themes we highlighted in the Scorecard is the growing role that investors are playing in tech accountability. So Sarah, my question for you is: what's the business case for investors? Why do investors care about human rights in the tech sector, what strategies can they use to hold companies accountable, and what strategies have you specifically used to this end?

Sarah Couturier-Tanoh:

Thank you, Nathalie. It's true that when you put these two terms, investors and human rights, in the same sentence, the general public often raises an eyebrow, because most people don't really see investors as allies in the fight for human rights or democracy in general. But investors, because they are the owners of the companies in which they invest, are in a unique position to push companies in certain directions. They can leverage share ownership and powers such as their voting rights, for example, to do that. And that's exactly what we do at SHARE: we help investors steward their assets in ways that contribute to positive social and environmental outcomes. Now, while the idea that profit should be the only thing investors look for when making investment decisions is still very prevalent, there is a significant portion of investors, especially institutional investors, that agree on the materiality of other types of externalities, including social, societal, and environmental outcomes.

And while a few investors would base that assessment on moral values and ethics, most investors believe that social or environmental impacts represent risks for companies, and sometimes for economies and societies, and therefore these risks should be managed.

If I take the example of human rights in the tech sector, I must say that it is a fairly new area for most investors. We see this risk as emerging, and there is a growing understanding that we need to pay attention to the way some companies, because of their outsized influence on society, may impact human rights and democracies, like Meta Platforms for instance. That is also the case for companies that rely on the collection and exploitation of personal data, including facial recognition, like Google, for example.

So, I can take two examples to illustrate what investors can do to support the fight for human rights in the tech sector. The first example is about Meta Platforms. It is clear that this company has a human rights problem, and there are existing human rights risks and probably new risks to come with the development of the Metaverse, for instance. And that's the reason why we co-filed a shareholder proposal with other investors, including Arjuna Capital, calling on the company to conduct a human rights impact assessment on the Metaverse.

So this proposal will be voted on at the next AGM. And basically, the rule is that if a majority of investors vote in favor of this proposal, good practice is that the company should implement it. Now, Meta Platforms is a bit strange because, as investors, we believe that human rights risks are amplified by a corporate structure that concentrates most of the power in Mark Zuckerberg's hands, since he has the double function of CEO and chairman. This means that there are no real checks and balances within the company, and those are essential in every company to ensure that management takes appropriate decisions and that the board serves the best interests of shareholders. In Meta Platforms' case, shareholders' voices are not heard. The management and the board have failed on many, many occasions to address shareholders' concerns, especially on human rights and governance matters, including when a majority of shareholders have voted for some shareholder proposals. So approximately two months ago, we convened a group of 15 investors, including SHARE, that collectively represent $2.7 trillion in assets under management.

And we worked with the company and asked that they implement certain governance reforms that would strengthen shareholders' rights, that they not renominate Peggy Alford and Marc Andreessen as board members, and that they nominate two truly independent directors instead. The company ignored our call, so the next logical step for us was to recommend that every shareholder vote against the two directors, to send a clear signal to the board and the management that we need change, and that change needs to happen now. Meta Platforms' AGM will be at the end of the month, so we'll see the result of this vote; usually we consider that this kind of vote is meaningful when more than 10 to 15% of shareholders vote against directors. I have another example with Google, but I'm not sure I have time to do that. Do I have time?

Yeah, I do. Okay. So I'd like to use the other example, of Alphabet. With the support of the Ranking Digital Rights team, we designed and filed a shareholder proposal asking the company to conduct what we call a human rights impact assessment, to identify and address potential human rights risks that would be created by Alphabet's new advertising system called FLoC. The company canceled the implementation of FLoC and decided to implement another advertising system instead, called the Topics API. We had a call with members of the leadership team of Alphabet, and they said that they canceled FLoC because of negative feedback they received from civil society actors, experts, and also investors linked to this proposal. So these kinds of proposals and communications between shareholders and companies really help to amplify the voice of civil society actors.

So we agreed to withdraw the proposal, and in exchange the company agreed to commit to meet with us twice between now and October, and to include in those conversations members of the Ranking Digital Rights team. And we hope that with the presence of these experts, we'll be able to move the needle. So I think that what we're doing with Meta Platforms or even Alphabet illustrates well some of the tools we have as investors to move the needle and to support civil society organizations pushing for better human rights in the tech sector. We know that our impact is meaningful but modest, but I believe that in these circumstances an all-hands-on-deck approach is necessary, and investors should play their part.

Nathalie Maréchal:

Thank you, Sarah. I am going to start with an audience question for Sophie. Sophie, why do you think America focused more on the whistleblowing from Frances Haugen and what she found versus what you identified, given that you blew the whistle first? Why do you think her whistleblowing had more uptake in the public discourse in the U.S.?

Sophie Zhang:

I'm not a public relations expert, so this is just personal speculation. My guess is that it's a combination of factors. First, Frances spoke to issues that were more broadly interesting and intriguing to Americans, such as the teen mental health crisis, which I think is more relatable to most Americans than the abuse of Facebook by dictators in Honduras or Azerbaijan. Even when I came forward about decisions made in the United States, that was mostly a sideshow, which did not get much pickup. The second aspect I'd point to is that, frankly, I was probably pretty naive when I came forward. I thought I would just go out there, talk to everyone, and they could decide on their own whether to listen to me or not. Frances took a more proactive approach of getting PR support, et cetera, which frankly was a lot more effective than what I did, which is why PR people get paid in the first place.

I suppose. I mean, right now it's a bit too late for me, and being essentially unscripted and doing everything myself is essentially my brand now, so I'm running with it. I do find it a bit funny how some people criticized Frances for being too prepared and poised and scripted, and then they turn around and look at me and say, you can't trust her, she stutters, she has an accent, she's not prepared enough. I mean, ultimately some people will criticize the messenger when what they dislike is the message itself.

Nathalie Maréchal:

Yep. I think that's absolutely on point. One thing I want to make sure that we really do talk about is Russia's invasion of Ukraine. Obviously Big Tech is not responsible for Putin's regime and the long history of Russian imperialism; that's not something we're going to pin on Big Tech platforms. But they are nevertheless implicated in how this invasion and brutal occupation is playing out. And Kasia, I know you're quite close to the situation, being in Poland and being active in Eastern European activist networks. What can we learn about how Big Tech companies operate today from their recent actions in Ukraine, but also in Belarus, Russia, and the broader region, and how should that influence our collective advocacy agenda?

Katarzyna Szymielewicz:

I'm happy to say that in terms of our agenda, the civil society agenda, including certainly what Ranking Digital Rights has been saying for ages, we do not need to correct anything. We have been saying this from the very beginning of that conversation: the business model is the problem, and the business model needs to change. The problem is policymakers. Even when they say they're ready to regulate, as they have said in the EU, and they have declared war against Big Tech's abuses, they're still not exactly ready to attack the core of the business model, which is based on people's engagement, based on exploiting users' attention, based on making money from behavioral observations. If we don't attack that, we will not change the machine behind this information war that has escalated nowadays in my part of Europe.

So it wasn't good news for our movement when we saw that in the first weeks of the war, everybody, including governments, basically targeted Big Tech as the solution, asking them to clean certain disinformation agents from the internet, to block certain accounts, to block certain people or Russian agencies from speaking publicly.

As if that were a way to solve the problem, while we all know that the solution lies much, much deeper, in the engine of these platforms. So speaking to that problem, I can only quickly indicate what we are hoping for in the DSA that might prove to some extent useful in solving it, though not radical enough. First, and this is also very interesting in the context of what has been said today, we will have much more robust risk assessment mechanisms in the DSA, meaning that platforms themselves will be expected by the regulator to self-assess the risks caused by their business model, including the way they target ads, including social media algorithms and the impact of these algorithms, their moderation practices, and their targeting mechanisms on democracy, public health, cybersecurity, everything that matters. If they do these risk assessments right, we will no longer need whistleblowing.

Obviously that's just a joke. I know they will not do it well enough, because they have no interest in doing it well enough, but at the same time, we have a European Commission invested with strong enforcement measures, able to force better risk assessments. And more interestingly for us here, we have new rights for civil society and other independent experts, including so-called vetted researchers, to demand access to data about all these mechanisms that operate inside large platforms. So hopefully we will be able to question risk assessments when they are not done properly and demand real data about how, for example, social media recommender systems or targeting algorithms operate: what type of data they take into account, what kind of optimization targets Big Tech uses, and all that. So hopefully this is a foot in the door for us in Europe, and hopefully globally as well, to demand more accountability.

Finally, again not radical enough but an interesting measure, there will be limitations on how Big Tech can target people in Europe. We wanted to essentially prohibit the use of observed data about humans, because we believe that people would hardly ever authorize behavioral observations to be used against them, to manipulate them with sponsored content in general. Unfortunately, that proved too radical in the debate we had in Brussels. But what we won is a partial ban on the use of sensitive data, including observed sensitive data, and of any data about children. So again, not radical, far from what we wanted, but a foot in the door toward changing the most toxic aspects of that business model.

Nathalie Maréchal:

Right, and of course you're referring to a ban on surveillance advertising, which is something I know that Accountable Tech and Ranking Digital Rights both support. Now here's a question from the audience, and I think either Chris or Jesse could take it. Do you have a sense of how American lawmakers are viewing the deeper European reforms that Kasia was just talking about? And what would it take to get U.S. lawmakers to move in that direction here? It's a question for either of you, and if the other one wants to build on what the first one says, please go for it.

Jesse Lehrich:

We're trying to do some education around the DSA and DMA right now, because I think frankly, a lot of lawmakers' reaction in the U.S. to the DSA and DMA is not really knowing what they are. And I think Chris alluded earlier to the gut-reaction, reflexive opposition that I think we still sort of have here in the U.S., where regulation is a tough sell as it is. But certainly when the Europeans are regulating our great American companies, I think there's a sort of antiquated sentiment from Washington that it's their role to jump to the defense of Big Tech's bottom line. But we've actually put together a memo that really runs through this, and I'm happy to circulate it to the community afterwards.

One of the things I find most interesting is that the DSA and DMA, to me, read like an omnibus package of some of the best pieces of legislation that are before Congress. So today the Senate is marking up the Platform Accountability and Transparency Act, which would enshrine a lot of transparency mechanisms similar to those included in the Digital Services Act. That's a bipartisan bill Senator Portman is supporting along with Senator Klobuchar. Risk assessments and independent auditing are central to the bipartisan Kids Online Safety Act that Senators Blumenthal and Blackburn have introduced. And I won't run through the full litany of everything, but the DMA shares a lot of qualities with the antitrust bill that Chris alluded to earlier, which takes direct aim at self-preferencing and other anti-competitive abuses in the digital market.

I wish that we were further along and that we had more of a framework, as Chris said, a sweeping, all-of-the-above approach that really takes a comprehensive look at digital markets and how we need to rewrite the rules. But the other point that I make to folks on the Hill is that if we don't make the rules, the rules are going to get rewritten without us. So I hope that, if nothing else, that serves as a major impetus for Congress to get its act together and finally push some legislation across the finish line after years of talking about it.

Chris Lewis:

That was well put, Jesse. I'll just add that, for better or for worse, we may need to advocate for our policymakers in Washington to start to build on the studies that have been done in Europe. We've had an excellent investigation in the House of Representatives here in Washington looking at some of the harms, purely competition harms, around tech accountability, but there's much more work that we need to do to build on that, to look beyond competition harms to actual consumer harms and other threats. So, as I said before, I'm not encouraged in the short run, but I am encouraged in the long run. The hope is that as policymakers learn the details of what's happening in Europe, they can see that many of the harms they're concerned about with the tech sector are being looked at, and that they'll hopefully find interest in American-style approaches to addressing those challenges.

We're already seeing the Federal Trade Commission, for example, starting a proceeding to look at surveillance advertising and whether or not there could be a ban, should be a ban, or something short of a ban.

These sorts of analyses and studies are important. This is why we've called for years to have an expert digital regulator for tech platforms, because we're just not seeing Congress keeping up with the pace of technological change and changes in the marketplace. And so while there is increasing interest, I would hope that the work at the FTC or the empowerment of an expert regulator could go a long way toward creating the sort of analysis of the marketplace that our policymakers in the U.S. will trust, rather than feeling that the European analysis is somehow a threat to American companies. When we hear American legislators talk about digital harms, they're often the same ones they're seeing in Europe, but then somehow this protectionism comes about, and we just have to find our way around that.

Katarzyna Szymielewicz:

Just to follow up on what Chris just said: it would be extremely helpful for the debate we have in Europe to gather more evidence from industry on how alternative, more ethical business models play out in practice.

We feel here in Europe that there is a Stockholm syndrome we observe, especially with electronic media, who for ages have been critical of what Big Tech's business model demands from them, driving the quality of journalism down and making media more and more economically dependent on clickbait, on sensational and emotional content. Everything we rightfully criticize, especially in times of information war. But at the same time, nobody seems to believe that the economic alternative is viable, that we could move to contextual ads or to profiling people based on their consent. Without economic evidence to back these claims, it's very difficult for us, civil society, to go to industry and say, "Hey, guys, we know better. We will now tell you how to do your business." So it's more likely that we just say, what are the red lines on the civil society side? What are the safeguards? What are the prohibitions that we want business to observe? That is ethical and correct when we say so, but not extremely effective if you want to convince policymakers to say, "Yeah, we are ready to execute the ban."

So any reliable evidence coming from the U.S. backing that discussion against surveillance advertising would be extremely useful.

Nathalie Maréchal:

Another really serious development that's on my mind is the news, based on a Supreme Court leak a few days ago, that the U.S. Supreme Court appears poised to overturn Roe v. Wade, with really severe consequences not only for the right to abortion but for reproductive rights and for a whole host of individual rights and liberties that the court has recognized on the basis of the same right to privacy that underpins Roe v. Wade.

As with all questions of rights, there are implications for Big Tech and Big Tech accountability. And unfortunately, this is another area where we can look to Europe for lessons learned and experience. And Katarzyna, I know that you and your organization have done a great deal of work around reproductive rights and the right to information and privacy online in that context. What advice do you have, or what lessons learned can you share with American civil society groups and individuals in this context as we contemplate the possibility of Roe being overturned?

Katarzyna Szymielewicz:

Well, I guess it all starts with informing society about what is really at stake and preventing the debate from landing in extremes. The worst possible result, which we unfortunately observe in Poland, is that both sides of the debate are using more and more radical arguments, and the debate is less and less evidence-based and simply more emotional. It's the same problem we observe in elections and in the context of conflicts like the war in Ukraine: the same lack of any possibility of meeting somewhere in a rational place to solve real problems. This is particularly troubling.

I would say, being very liberal myself when it comes to reproductive rights, I have to admit that there are usually societal problems hidden behind the other side's arguments. That argumentation wouldn't exist in society if there were no problem. So it's not just spin that we have to face from the other side; there are usually problems we need to understand, while there is so little space in the debate for the two sides to meet and have an honest conversation. This lack of space, starting with social media and ending with the Parliament, is the problem that I think needs to be tackled by civil society, because we are the only ones who can create a forum for a more rational, less emotional debate about very complex societal challenges.

Nathalie Maréchal:

Thank you, Kasia. So another hot topic in the past couple weeks is, of course, Elon Musk's planned acquisition of Twitter. Sarah, from an investor perspective, what's your reaction to that? What does it look like, keeping a privately held Twitter accountable?

Sarah Couturier-Tanoh:

Well, first of all, the situation with Twitter and Elon Musk is very concerning from the human rights standpoint. Musk has clearly stated his intention to limit content moderation as much as possible in the name of free speech, and this is very dangerous. We know what happens when people can say whatever they want without safeguards, and this interpretation of free speech can lead to an increase in hate speech and disinformation. And this would have a direct impact on public opinion and democracy in general, especially in the current circumstances we live in, with the rise of extremism and division. Now, the offer has been made, and Twitter accepted it. We should expect several things. The first one is regulators' review of the transaction, but that is usually limited to competition and antitrust issues, which are unlikely to be at stake in this case.

And the second thing is shareholders' approval of the transaction, which would take the form of a vote. We thought that we would have this vote at the upcoming AGM on May 25, but it doesn't seem like it. So shareholders have the power to influence this transaction to some extent. In their evaluation, they will, of course, take into account important financial considerations but also other, non-financial considerations, as Musk's takeover will likely have an important impact on the future of the company. So it is crucial for shareholders to pay attention to Elon Musk's plans for the company and how they would impact human rights. And if there are no sufficient safeguards, it is very important for shareholders to express that concern through their vote. Another thing to consider is that Elon Musk is considering taking the company private for three years to implement change without shareholder scrutiny.

So some would argue that it would make these changes more efficient, because there wouldn't be shareholders to analyze, challenge, and approve or disapprove of the company's plans. But we could also strongly argue that shareholders' ability to take an active part in Twitter's transformation would help the company not lose sight of human rights risks. So what I see here is an attempt to curtail shareholders' rights as much as possible, so that Elon Musk can do whatever he wants with the company, and then we would be faced with a fait accompli and it would be too late.

Nathalie Maréchal:

I want to give everybody a chance to sum up their takeaways from this conversation. I'd like every one of you to share what you need from your allies. This is a movement where we all have different roles to play, different strengths, different positionalities. And one thing that I'm hearing is that groups like Ranking Digital Rights, which really straddle the line between research and advocacy, need to do a better job of surfacing academic research and other types of civil society research into the public conversation.

Someone in the audience highlighted that it's not entirely true that only anecdotal evidence of platform harms exists, though it is true that that's what hits the news. There are academic publications and free software for collecting evidence on various types of harms in a systematic manner, but that work doesn't make it into the news. It doesn't make it into the policy conversation. And I think that's something RDR, as well as others, can do a better job with: being the pipeline that gets that knowledge from the academy to the public policy conversation. So I'll go in reverse alphabetical order for once, starting with Sophie. What do you need? What are your takeaways from this conversation, or what do you need from your allies in the movement to play your own role better?

Sophie Zhang:

I think that something that would be helpful is just increasing general understanding of the situation and the different dynamics at play, because there are a lot of different subjects that get dumped together under the umbrella of digital rights or tech accountability, et cetera. It includes everything from user rights and transparency on terms of service, to privacy protections, to issues like hate speech and misinformation, to issues like inauthentic accounts, which is what I personally worked on, and many others. And oftentimes, when people invite me to panels or presentations or talks, they have completely the wrong idea of what I worked on. They give me a prompt like, "based on your expertise working on artificial intelligence..." and I'm like, "No, I did not work on artificial intelligence at all," or on misinformation or hate speech, et cetera.

You have to understand the problem before you can solve it in the first place. And there are a lot of different problems currently put together under the same umbrella that are actually, in many ways, very different problems with different solutions. Many people have suggested breaking up Facebook. That is a solution that solves exactly one problem, which is that social media companies are too powerful. It doesn't do anything to address the others. So building understanding is my conclusion. That's what's needed, anyways.

Nathalie Maréchal:

Great. And to clarify, I think what you meant was that breaking up Meta is not a silver bullet. It would solve the problem of too much power, but we would still have many other problems that we would need to use different solutions to address. Great. Katarzyna, what's your takeaway or your ask?

Katarzyna Szymielewicz:

Thank you for a super interesting debate. I would say two things. In terms of pursuing our mission and having more evidence to say what we want to say to policymakers: there is never enough evidence of societal harm, more so than individual harms. Individual harms are super difficult to document and also not very convincing in times when people get killed, and we have a huge storm coming up here in Europe as well. Maybe societal harms are the only ones that can speak to policymakers. So, more documentation of that. We are even preparing one project with Global Witness documenting how the News Feed on Facebook, the way it is moderated, pushes disinformation up the feed, a simple thing. But again, we need to keep documenting that. So the more sources, the more evidence proving this issue is connected to how social media work with their engines, the more useful it will be.

The other terrain where we need more evidence is proving that an alternative internet, alternative business models, are possible. So everything that can prove our concepts, that something else, something healthier, more sustainable, more privacy-preserving, is possible, exists somewhere, and is also economically viable, would be incredible. Speaking of breaking up Facebook, we have also been against that claim for a long time. We try to push instead for a modular separation, separating the layers of something like Facebook to enable competition within each and every layer, including algorithms and interfaces. I still believe it's an excellent idea, but people simply don't understand it. So whatever we can do, especially coming from the business side, to prove or explain these concepts in practice would be incredibly helpful to push that debate beyond just complaining.

Nathalie Maréchal:

Chris?

Chris Lewis:

There's so much work to do! Just to pick up where Kasia left off: moving beyond complaining, I think, is really important with the public. And so I agree with your point about making studies available, helping the public understand that there are solutions out there. I feel like a lot of the public feels powerless in a world where they don't trust government right now, and the real options for who to empower are limited. It's either government, the companies, or the public, and empowering one while leaving the others completely unempowered, I think, leaves us with a power imbalance that exacerbates problems like disinformation.

So we have a lot of work to do to help the public understand that there's a role for them, and there's a role for the government, hopefully democratic governance, that can also empower them. And there's a role for setting expectations that platforms use their power in a way that meets public expectations. So we have a lot of work to do to get folks talking together. And those conversations are also, hopefully, in the U.S. context, going to help bridge some of the ideological divides that we have, because we simply have folks who are living in different information bubbles. And so breaking through that is a challenge that civil society has to take on.

Nathalie Maréchal:

Definitely. Jesse?

Jesse Lehrich:

Yeah. Just to continue building on what Chris was saying, I think we all have work to do in terms of continuing these dialogues outside of our own echo chambers, and not just on social media; we have a tendency in the advocacy world to talk to ourselves. And it might feel good, or it might be a fun way to spend time, sitting around and debating these things with people who agree with us, but it's not a good way to make progress. And in particular, especially being a straight White man, I've been in a lot of rooms, especially on this issue (I know this is pervasive across society, but especially on tech issues), where the whole room looks like me, and we're sitting around talking about how to protect people from online voter suppression.

And we have to do the work to make sure that we're doing the outreach, the education, and the coalition-building that is really necessary not only to make progress but to get it right, because, at the end of the day, the people who are bearing the brunt of all of the harms that we're talking about, all the time, are not me. They're communities of color. They're the people of Honduras and Azerbaijan, where Facebook hasn't invested any resources. They're LGBTQ communities. And so we need to do a better job of getting outside of that tech policy bubble and figuring out how to bring people to the table so they have a voice as we make those decisions, and how to communicate to the broader public, as Chris was saying. Because it's going to take all of us to make progress and to make sure that progress is equitable and advances the things that we care about deeply, and not just more of the status quo.

Nathalie Maréchal:

Yeah. I couldn't agree more. When I first started working in this field, about 10 years ago, there was very clearly a line: there were human rights, and there were digital rights online; there were tech issues, and there were all other issues. That line was always kind of flimsy and not entirely grounded in reality. Now it's completely gone, and yet there are still people who think it's there. I think we need to really educate people that it's not about rights online; it's about rights. The problem with harmful speech is not that it exists on the internet. I hear a lot of people, including in Congress, act like the real problem is that there are images of child abuse on the internet. No. That's a manifestation of the real problem, which is that children are being abused. And you can extrapolate that to any issue that we're concerned about here.

And so I think, as you're saying, it's really important to break down these barriers, and to communicate and work hand in hand with the reproductive rights movement, the environmental movement, the voting rights movement, immigrants' rights, LGBT rights... I'm not going to try to list all the groups that we're concerned about here, because we'd be here all day, but I couldn't agree more. Sarah, as an investor, what can civil society groups or whistleblowers or other types of actors in our movement do to help you and others like you hold companies accountable using your power as investors?

Sarah Couturier-Tanoh:

Yeah, sure. So as I said, I think investors have a lot of power because they can directly influence companies' behavior and decisions, but we wouldn't be able to do that appropriately without the help and support of civil society actors and academics. We're not experts; we cannot be experts in everything. And we are here to listen, to understand, and to facilitate that conversation between civil society actors and economic actors. The Ranking Digital Rights team has been instrumental in the filing of several shareholder proposals this year on these issues. And thanks to that dynamic, we were able to bring to companies' management and boards some human rights issues that I think would otherwise have taken more time for them to address. So I guess this is just my way of saying thank you, and let's keep that conversation going.

Nathalie Maréchal:

Well, you're most welcome, Sarah. You and all the other investors have been really tremendous partners for us, especially over the past year. I'm conscious of time, and I want to be respectful of everyone's time. So I want to end by thanking all of our amazing panelists. It's always been a joy every time I've spoken to each of you individually, and then having you all together as a group has been a real treat. Thanks.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
