Evaluating the First Systemic Risk and Audit Reports Under the Digital Services Act

Ramsha Jahangir / Feb 23, 2025

Audio of this conversation is available via your favorite podcast service.

In this podcast episode, we present a roundtable discussion on what's to be learned from the first systemic risk assessments and independent audit reports from Very Large Online Platforms and Search Engines produced in compliance with the European Union's Digital Services Act. For this discussion, I'm joined by:

  • Hilary Ross, program lead at the Global Network Initiative (GNI);
  • Magdalena Jozwiak, associate researcher at the DSA Observatory; and
  • Svea Windwehr, the assistant director of EU policy at the Electronic Frontier Foundation (EFF).

What follows is a lightly edited transcript of the discussion.

Ramsha Jahangir:

I'm Ramsha Jahangir, Associate Editor at Tech Policy Press. At Tech Policy Press, we've been closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services. Last November, 19 of the biggest online platforms and search engines took a major step under the DSA, publishing their first systemic risk assessments and independent audit reports. These reports detail how their services connect to various risks and the platforms' efforts to manage them. Despite a growing conflict between American and European officials over the implementation of the EU's tech laws, these reports—thousands of pages long—haven't received a lot of public attention. To help us understand how these reports are meeting expectations, I'm joined by a fantastic all-women panel today, which, if my research is correct, may be a first for this podcast. Let's start with a round of introductions.

Hilary Ross:

My name is Hilary Ross and I'm the program lead at the Global Network Initiative, or GNI. We're a non-profit, multi-stakeholder organization, which means we have over a hundred members globally from across four sectors: the tech industry, civil society, investors, and academia. GNI sets a standard, based on international human rights principles, for how technology companies should respect their users' freedom of expression and privacy rights. In particular, we focus on how companies can respond in rights-respecting ways when governments make over-broad demands or requests of companies in ways that contravene international human rights principles.

In terms of thinking about the DSA, as governments have gotten serious about regulating the tech sector, they have moved to require the kinds of steps that we've long encouraged companies to undertake voluntarily.

So that is things like having clear internal human rights policies, engaging stakeholders, doing human rights due diligence, transparency reporting, et cetera. GNI had some reservations about the DSA, but we publicly welcomed the final text as a democratic regulation that we see as mostly aligned with our framework and the broader approach set out in the UN Guiding Principles on Business and Human Rights. But from our experience, we know that ensuring that regulation really respects users' rights depends on implementation. So that's what we're focused on engaging with right now, and I'm looking forward to talking with you all about how the risk assessments are actually being implemented.

Magdalena Jozwiak:

My name is Magdalena Jozwiak and I am an associate researcher at the DSA Observatory, which is part of a research institute, the Institute for Information Law at the University of Amsterdam. In my work at the DSA Observatory, I focus mainly on the topic of systemic risks within the DSA, but part of our team is also working on issues connected to data access and the human rights framework indicated by the DSA. So we cover various topics around the DSA, very much including the topic of today's conversation. Thanks for the invite, and happy to be here.

Svea Windwehr:

I'm Svea Windwehr, the assistant director of EU policy at the Electronic Frontier Foundation (EFF), where I focus on platform regulation, user rights, surveillance, and the regulation of AI. So, quite a broad number of topics. Previously I worked at the Center for User Rights, a project at a German strategic litigation NGO to enforce user rights under the Digital Services Act. Before that I've also worked at Google and the European Commission, so I've been exposed to different perspectives on the topic of platform regulation. At EFF, we have been following and contributing to the DSA discussions for the past five years. I think we share some of the concerns that Hilary has outlined, in particular when it comes to the systemic risk governance approach. We see some risks related to the poor definition of systemic risk and the potential overreach, politicization, or overregulation of the DSA in that way. Yeah, I'm really excited to dive deeper into the topic, and I'll leave it at that.

Ramsha Jahangir:

I'm really grateful to have each of you on the line today. The reports we're discussing will inform any future enforcement actions, so let's dive in. Some of you have already mentioned your initial reactions to the first round of VLOP and VLOSE risk assessment reports, but I'm going to pose the question again. In short, did they meet, exceed, or fall short of your expectations? And most importantly, are they genuinely contributing to meaningful transparency?

Svea Windwehr:

So, I think fundamentally, the risk reports are not necessarily supposed to contribute to meaningful transparency. I mainly see them as a compliance exercise that platforms have to go through. From that perspective, I'm not necessarily disappointed by the reports, but I also didn't have the highest expectations, because, again, this is fundamentally a compliance exercise that companies had to go through. So I think it's, to a certain extent, self-evident that the reports don't contain, let's say, a smoking gun pointing towards massive non-compliance or fundamental issues. But I think they do offer a starting point to compare risk assessment and risk mitigation across different companies and to push companies for more granularity, more data, and better documentation. So in that sense, the gaps that we see in the reports, and I'm sure we'll cover many of them, are quite helpful in pushing the conversation forward and giving us a starting point as to what we would like to see in the next iteration of the reports.

Hilary Ross:

I would say at a high level that these reports being published is generally an important step forward, toward enhanced transparency and toward using that transparency for more accountability from large services. On the positive side, similarly to Svea, the reports help us understand how companies are seeing their risks, what methodologies they're using, and how compliance with the DSA is working internally. And I do think for more careful readers, they'll give a number of details that will be helpful both to understanding specific companies' management of risks and how they're thinking about protecting rights, as well as how the ecosystem more broadly is working and how we might understand the ecosystem as a whole. But I do think, as Svea said, that maybe the field more generally is coming to terms with the fact that, one, these are compliance documents, and two, they're quite long, complex, and hard to parse. So they might not be serving the set of purposes that some across civil society hoped they might serve.

Magdalena Jozwiak:

Perhaps I was a little bit disappointed, but it's all about managing expectations. Of course, this is a first-of-its-kind exercise in transparency, offering some insight into how platforms think about the risks they're creating, especially at the systemic level. So it did create some momentum and expectations that finally, with this regulatory tool to force platforms to publish certain information, we would get more transparency. Perhaps those expectations or hopes were misplaced, but there was that momentum. More than anything, though, this is a first step. What is really important now is the next step: how much data will be available once we finally have the Delegated Act on Article 40 of the DSA. The published reports can also be a first step for researchers who want to access non-public data, to orient themselves in what they want to find out further.

The reports also give us some insight into the enforcement actions by the Commission. So far we have had very little insight into what the Commission was looking at when it asked platforms for more information and opened proceedings against them, but now we actually have some more context for those actions. So in that sense, it is, I think, a first step in this iterative process. And the next reports, I think, will be a little more informative. They won't contain a smoking gun, but they will contribute a bit more transparency. That's really my hope: that they will contribute a little more to building accountability for platforms.

Svea Windwehr:

I want to circle back to one point you just made, Magdalena, which I think is really important: that the reports will help researchers know what to look for. I think that's a really underappreciated dimension of these reports, and of the transparency reports under the DSA overall. One of the most interesting categories of data that researchers will hopefully get access to is exactly the kind of documentation that went into the risk assessment reports in the first place, which we don't really get from the reports or from the audit reports. That will be extremely helpful for getting a more precise view of how platforms think about risks internally, what kinds of tests they ran to assess risks, and what kind of documentation they produce. So I think that's a really important point.

Ramsha Jahangir:

One of the things that has come up is the lack of reporting about the data, metrics, and methods used to evaluate risks. Speaking to that, do you think these gaps in reporting are indicative of a broader reluctance by platforms to fully disclose the inner workings of their systems? And how can regulators compel greater transparency in these crucial areas and ensure that future reports actually live up to these expectations?

Magdalena Jozwiak:

Yeah, I think the gaps that we see in the reporting, and perhaps the scarcity of data, are definitely deliberate. As long as platforms are not obliged to publish the underlying data, they can present their findings in very technocratic language that gives the sense of well-done homework. It's understandable that platforms chose to phrase the reports in this way, but the very important next step will be to actually ask for data. With Article 40(4) and 40(12) of the DSA, hopefully that will be possible, and I know there are already teams of researchers getting ready to make the very first data access requests.

So depending on how well that goes and how much mobilization there is, that could be a worthwhile exercise and contribute to much more meaningful findings. The other huge role here is for the Commission, which can ask for information and has so far been quite diligent in requesting information from the platforms. So yeah, those two active stakeholders, I think, have a tremendous role to play.

Hilary Ross:

I think the point Magdalena and Svea noted about the importance of the reports as a tool for researchers to then ask for further data is really critical. One thing I think will be particularly important: there are concerns about the DSA, as an overall regulatory system, incentivizing over-moderation in the long term. Right now it's hard to know whether that will happen, and over time it will be hard to assess whether it is happening. So that's also a place where data requests will be quite important. Companies did cite some data in their reports around removals, appeals, and prevalence, but it would be great to see more evidence backing up the claims within the reports, and to have those assertions be assessable by outside researchers. That again connects back to why the risk assessments are tied to the researcher data access piece of the regulatory framework. So that's one key piece.

Svea Windwehr:

To come back to your question, Ramsha, about whether regulators can compel greater transparency in the areas you mentioned: this is probably something we'll come back to, but there is this open question of whether guidelines are needed, whether additional guidance from the Commission is needed on how risk assessments and risk mitigation are supposed to take place. Personally, I would find guidelines for risk mitigation a lot more useful: clear guidance on how mitigation measures must be tested and documented, thresholds for successful mitigation, et cetera. But unfortunately, that is not foreseen; under the DSA there is not really an option, at least not an obvious one, to introduce guidelines for risk mitigation. Issuing guidelines for the risk assessment part of the process, though, is one of the options the Commission still has left.

Ramsha Jahangir:

I'm going to bring this back to what we can glean from the existing reports. Can we get any insights from the risk assessment reports about the specific types of risks regulators are most concerned about?

Magdalena Jozwiak:

From the perspective of the regulator, I think it's important to look at the Commission's enforcement actions. Platforms do hint at specific risks; as far as I could tell from the reports I've read, these are connected to generative AI and elections. In 2024, we had all those elections, including the European Parliament elections, so that was an important issue in the risk assessments I've read. Meta, for example, really pointed out various mitigation measures and special tools that were developed specifically for that event. So that seems to be a focus for many platforms, but for the Commission as well. From the Commission's enforcement actions, we have seen so far that it has been very worried about disinformation around elections, and also about the so-called rabbit holes, which is something the Commission focused on in at least three of its enforcement actions.

Generative AI, not so much. As far as I could tell from the press releases, the Commission hasn't looked at it yet, although it was a focus for many platforms. So there is some overlap between what we see in the reports and the Commission's actions, and some areas where the Commission hasn't yet started.

Svea Windwehr:

I would agree. And I think another area that is quite prominent is child safety. Both from the Commission's enforcement actions and from the risk reports, we can see quite a bit of focus on child safety as an overarching concern across different platforms.

Hilary Ross:

I do think that across the reports there seems to be more of a focus on risks related to content, and that makes sense. But it's also tricky, since these pieces of the DSA are not explicitly content regulation, and there is less focus on risks related to the design of the products themselves. So I think that will be an area to watch, and I'm curious what we'll see around product design. That was also highlighted by Peter Chapman at Georgetown and his team.

Magdalena Jozwiak:

That is also something the Commission has been looking into. For example, in its proceedings against TikTok, it focused on TikTok Lite and how the gamification of a service can encourage addictive behaviors. So that is something the Commission does look into, and maybe platforms are focusing on it a little bit less.

Ramsha Jahangir:

One of the things you have already mentioned in many ways is the wide range of methodologies used for assessing risks. And that's in some ways expected, as the DSA covers many different services. Looking at these reports, with the exercise having gone through it once, maybe the question is also: what is the goal here? Do we want to understand the scope of risks across the digital ecosystem or across individual platforms? How are civil society and other stakeholders reading these assessments and resetting the goal, if at all?

Hilary Ross:

I think the goal should be both. The reports are useful on their own for each company. They could also, hopefully, be useful to compare against each other, though maybe I'll come back in a minute to why that's fairly difficult right now. And then I do think they're important for understanding the product ecosystem as a whole and what risks look like across the internet as a whole, or not the whole internet, I should not say that, but across this suite of products and services as a whole. That's not a definitive answer, but I do think we should keep in mind that the reports can serve these different purposes.

One thing to follow up on with that is the methodologies themselves. The reason many, many people have been saying it's fairly difficult to compare across the reports as they stand is that, as we've said before, there's no guidance on what a risk assessment should look like. That's unusual compared with regulatory frameworks in other industries that require risk assessments, where typically there's a set of methodologies and benchmarks to assess against.

So on the one hand, the good news is that most companies aren't starting these assessments from scratch. They're building on practices that already exist. Many companies were using frameworks like the UN Guiding Principles on Business and Human Rights or other frameworks for assessing risks internally. And we saw that companies used a wide range of methodologies: some used their own internal methodologies, some used external frameworks, and some in particular used the DTSP framework. But right now it's hard to do that cross-report assessment, because the methodologies companies are using, and the benchmarks within those methodologies, differ from company to company.

Svea Windwehr:

So personally, I feel like the reports can really only speak to individual platforms and not really the ecosystem, because these services are, not all of them, but most of them, very different in terms of features, product logics, and users. So from that perspective, even if the reports were standardized, they would be really difficult to compare in a way that gives a holistic view of risks across the ecosystem. Also, the risk reports are limited in a sense: they tie back to the original risk categories, and those categories are limited, right?

This is not a complete description of systemic risks online. So from that perspective, these reports can only offer a very limited overview, I think, and not necessarily give a complete image of what's going on online. Personally, I'm a little bit skeptical about the extent to which these reports should be viewed as being able to do that, or whether we should just look at them as compliance reports that exist for a specific purpose, are created in a specific context, and don't even attempt to give this holistic overview. That overview would of course be interesting, but I feel like independent research might be better able to draw that picture.

Hilary Ross:

I'm hopeful that they can serve or point toward that, but I'm curious what we'll learn over the coming years, and maybe you're being more practical in terms of what they actually will do.

Ramsha Jahangir:

There's also an element of politicization, right? Especially in the current context, as I mentioned: the ongoing conflict between the US and the EU over enforcement of tech regulation in the EU. There's this concern about how we can mitigate too much involvement and ensure that the DSA is enforced in a fair and objective manner, and, to some extent, that it continues to be enforced at all. Do you see those concerns as well when you're reading these reports, that flexibility could be beneficial or it could be detrimental to the overall effectiveness of the DSA?

Svea Windwehr:

I definitely share those concerns. When it comes to the overall political context in which DSA enforcement happens, I think there's a real need to strengthen enforcement, for the political actors within the Commission and the broader political ecosystem to take a step back, and for the technical working level to do its enforcement work independently. I think the independence of regulators needs to be strengthened, which is actually quite crucial to make sure that the enforcement actions that do happen hold up in court and are as independent, thorough, and watertight as possible. But I think another element of this is that the DSA shouldn't be overloaded with political expectations.

I think currently there is this sense among many people in Brussels that the DSA is a sort of solution for all issues online, including disinformation, hate speech, et cetera, even issues that are not actually regulated by the DSA, such as disinformation. So I think the more the DSA becomes complicated by these political expectations for enforcement, and for the DSA to resolve complex societal issues, the harder it will be to maintain an independent enforcement strategy. So I think it's really crucial to strengthen enforcement on the one hand, but on the other, to be realistic about what the DSA tries to accomplish and what it doesn't really touch.

Hilary Ross:

I really agree with that, Svea. Maybe I'll come back to the political environment in a moment, but I do think there's an important piece around having further definition behind the risk assessment methodologies. Some level of guidance or standards would really help with that. Without it, there's also more room for politicized, possibly unintentional, regulatory overreach, or for companies to adopt approaches that could inadvertently harm users' rights. So that's something that we at GNI are keen to support and hope to see develop.

And if the Commission doesn't move forward with developing guidance, I think there's space for additional stakeholders (civil society, companies, auditors, academics, the range of stakeholders who have expertise and a stake here) to come together and develop some shared frameworks and definitions. I'll just add, and I don't think I've mentioned this so far, that GNI, together with the Digital Trust and Safety Partnership (DTSP), has hosted two stakeholder engagement forums with our member companies that are regulated under the DSA, plus civil society and academics who have expertise in this space. That question of the lack of guidance around methodology has been a key piece, and I think it will keep coming back up if it's not clarified.

Magdalena Jozwiak:

I just wanted to add a little bit to what Svea said about the politicization of the DSA and the possible result that there's overreach by the Commission: that eventually the DSA might serve as a tool used against platforms to take down content that is not even illegal, and as a result really endanger freedom of expression and become a tool for undue influence. So that's one risk, and I do share that fear. On the other hand, when considering the amount of flexibility: on one side we would like enforcement to be swift and the Commission to act very diligently, but there is this fear of overreach, censorship, and a lack of transparency. On the other side, I'm also worried about leaving too much leeway for the platforms. I'm worried that platforms will set a certain tone, a certain way in which we think about the risks, that will be very difficult to contest, and that this will create a kind of resignation: oh, it's already all resolved.

There is very little we can do anyway, so we just go on with the vision of harms presented by the platforms. That is in itself dangerous, because the societal harms addressed in the DSA are so varied and broad, and connect to so many important fields of our activity as human beings, that it would be a shame to let the platforms set the agenda on them and decide: who are the vulnerable groups? Harm to which part of society are we addressing here? Who's at risk? How do we understand those risks in this political context? Those are not technocratic questions; they are political questions, and we shouldn't leave them for platforms to decide.

So I also have this fear of platforms capturing the discussion, and of there being very little energy left for challenging that. So I see a double threat here. This discussion between how much flexibility there should be and how strict the Commission should be in the current political moment is a very tough question, and for me one of the big issues with the DSA.

Svea Windwehr:

I completely agree. I think you phrased that balance really well. On the one hand, you're absolutely right that it cannot and shouldn't be platforms, or auditors for that matter, who decide what exactly a risk is and how to mitigate it effectively. And that is the situation we currently have, with auditors essentially assessing whether a risk has been mitigated sufficiently without any guidance from the Commission or any other regulator for that assessment. On the other hand, guidelines could also end up limiting the European Commission's enforcement. They could put forward a very static snapshot of our understanding of systemic risks and how they're best mitigated, which I think will definitely develop over time.

So I think it's really tricky what the right amount of guidance is, and whether this is something the Commission should work on on its own, whether the Commission even has the expertise to issue this kind of guidance, or whether this needs a multi-stakeholder approach. But then, of course, there is the question of whether those approaches are capable of delivering guidelines that are robust enough. So I think it's really difficult, and the question of capture that you mentioned is absolutely central to this.

Hilary Ross:

I'll just plus-one Magdalena, who I think framed really well the two dual risks: on one side, the DSA not actually serving its purpose of mitigating the real, serious, wide-ranging risks that harm people online; on the other, the countervailing risk of a politicized wielding of the DSA in ways that do not serve users' rights or mitigate risks for them. I do really worry about the political environment. We should be able to have good-faith, principled attempts at risk mitigation put in place by democratic governments that also respect users' fundamental rights to expression and privacy. And I think the environment is becoming more difficult to parse and navigate while trying to do that in good faith.

Ramsha Jahangir:

All big questions, and difficult to come up with answers, but just to make the conversation a little more complicated, I'm going to get to the point of stakeholder engagement in this political environment, but also this compliance environment. To give an example: despite Mark Zuckerberg's framing of the DSA as a censorship tool, Meta recently submitted a risk assessment in the EU following the company's recent changes to content moderation. David Sullivan from DTSP, in a recent piece, describes compliance with risk assessment requirements as a fact of life for VLOPs. So the environment has certainly changed, and a lot of you have already alluded to that, but there is a relationship between the assessments undertaken to run the business and the risk assessment activity undertaken to achieve compliance with the DSA. So in this new political and compliance environment, what does meaningful stakeholder engagement look like in a way that minimizes both political and platform capture?

Hilary Ross:

I think regardless of the context, we know some best practices around what meaningful stakeholder engagement can look like. I mentioned before that GNI and DTSP hosted a stakeholder engagement forum last year with our member companies, civil society, academia, and other representatives, and we published a summary report from it that listeners can go and check out. I should note that engagement did not include any government representatives; it was just companies and civil society. Then last month we hosted a follow-up virtual session for that same group, with a similar audience; at the top, we did have a brief presentation from the European Commission to offer some opening remarks. From those forums, we've learned a number of things. First, we've learned that civil society and academic stakeholders who were in conversation with or being consulted by companies often didn't realize that their engagement might actually be informing DSA compliance.

So they were engaging with companies as part of the companies' more general stakeholder engagement, but companies weren't necessarily explaining that the engagement could be used to inform their compliance with the DSA. And that's really a missed opportunity. When companies are transparent with external stakeholders upfront about what their engagement will be used for and what they're engaging on (for instance, in terms of the DSA, what risks companies are assessing and what mitigations they're considering), then those stakeholders can offer more nuanced insights toward mitigating risks while also protecting user rights. And once engagements are finished, there should be communication and better feedback loops, so stakeholders understand how their engagement or advice is being used.

One thing we saw from the reports themselves: all the reports do discuss some type of external stakeholder engagement, as is required by the DSA under Recital 90. But the extent of that engagement, the specificity, and the approaches range pretty widely across companies. So I think the DSA is an important opportunity to improve the overall landscape of how companies engage stakeholders in the key markets where they work. To me, that's a place where the compliance piece of the DSA could be made more meaningful and turned into just good business practice.

Svea Windwehr:

I think it's worth noting that, at least among European civil society organizations, I don't think anyone felt particularly consulted by tech companies when it came to the risk assessment reports. So yeah, there was a real lack of involvement and consultation. I also think it's important to think through the overall principles that should guide that kind of engagement. Hilary already mentioned some, but what I would like to see is a consistent and specific type of engagement. As Hilary mentioned, it's much more helpful to discuss specific issues, to be made aware in advance of what kinds of issues companies would like to discuss, and maybe even to get documentation beforehand, so that civil society organizations, which usually have far fewer resources than these companies, are able to prepare and then engage in a more meaningful conversation.

But the other issue, at least from my perspective, is that it's sometimes more helpful to have these conversations in a smaller group where other companies are not present. There is a real concern among many of these companies about sharing information they don't want to make available to their competitors. So having these smaller circles that are specific, but maybe also more transparent in that way, could be really helpful, as long as it happens consistently and not on a one-off basis to tick a DSA obligation, but to really establish a dialogue that is useful for both sides.

Ramsha Jahangir:

And is the Commission contributing to that in the ways civil society has really been pushing for, like these structured consultations? How is the Commission faring on that front?

Svea Windwehr:

I know the Commission is planning to hold an event or a workshop on the risk assessment reports that will, as far as I understand, include both civil society and companies. So I think there is a genuine understanding at the Commission of why this is something civil society is pushing for. But I don't know how concrete that planning is. I think it was supposed to happen in March. I haven't seen an invitation, but I'm sure that's about to come.

Hilary Ross:

I think it's really important that there's ongoing engagement happening in different kinds of forums. We're glad to be able to host the forums that we're hosting, but they're certainly not sufficient. So different ways for companies to engage, across different types of stakeholders and in different forums, I think is critical. And one more note: I think there's also a key role for the Commission to play, both in terms of facilitating some engagements and, potentially, offering guidance around what stakeholder engagement could look like. I'm not sure how useful that would be, but it's something that could be considered, and I know people have been discussing it.

But I also think, coming back to transparency and transparency about roles within the ecosystem, there needs to be transparency about how the Commission is engaging with companies and how companies are engaging with the Commission or with governments. In the sections on stakeholder engagement in some of the reports, interestingly, we did note that companies mentioned engaging with specific governments on some issues as well as engaging with the Commission, which makes sense, but there should be more transparency about what that entails.

Magdalena Jozwiak:

There is a lot of ambiguity from the Commission also, and not much is published yet. Even with the enforcement actions, we don't have access to the full enforcement actions, only to press releases, and that limits our understanding of what the Commission is doing. So I agree that some transparency on the side of the Commission would also be welcome. But coming back to this topic of consultations and the involvement of stakeholders, I think it's very important that there's some coordination so that platforms engage with different regions and different kinds of stakeholders, given the linguistic differentiation in the EU. As we saw with the Romanian election, local contexts are so different that ignoring those differences might actually cause platforms to overlook very real risks, or the processes and trends that are happening on the platform.

We don't know exactly what happened in the context of the Romanian elections; there are diverging reports on what was actually happening on TikTok there. But local contexts are very important, and I think local civil society organizations would need to play a very important role in consultations to really be able to see risks in an actual systemic way. I'm not sure to what extent there were consultations done locally, in different languages, giving access to maybe smaller organizations, but I think that would also be crucial.

Svea Windwehr:

Absolutely. And to be honest, I don't think those consultations took place; at least, I haven't heard from any local civil society organizations across Europe that have been consulted. And I think this is something that tech companies could do if they wanted to. It's not secret information who these people are, who these organizations are. Many of them have been trusted [inaudible 00:37:25] for years. So I think this is really an area where we should and can ask for a lot more from the companies.

Ramsha Jahangir:

Thanks so much. Just to sum up everything we've discussed today, the path forward clearly seems to be more transparency, more data, meaningful engagement, and fewer headlines. Thank you so much, ladies, for your time.
