A Conversation with Denmark's Tech Ambassador

Justin Hendrix / Apr 23, 2023

Audio of this conversation is available via your favorite podcast service.

In this episode, Tech Policy Press board member and UCLA School of Law postdoctoral research fellow Courtney Radsch interviews Anne Marie Engtoft Larsen, Denmark's Tech Ambassador, who represents the Danish government to the global tech industry and in global governance forums on emerging technologies. The discussion focuses on the role of tech in society, how to regulate artificial intelligence, how to accommodate non-English and indigenous languages in a tech ecosystem focused on scale, and how to fund journalism in the age of social media.

Below is a lightly edited transcript of the discussion.

Courtney Radsch:

We're excited to speak with Denmark's Tech Ambassador Anne Marie Engtoft Larsen, who returned to Silicon Valley after a visit to Washington for the Summit for Democracy. We first met earlier this year at a tech camp that brought human rights activists from more than 25 countries to Stanford to deepen their understanding of emerging technologies and their human rights impacts.

In recent years, many countries around the world have started appointing diplomats with a specific focus on technology. But Denmark was a pioneer of this relatively new approach and was the first country to appoint a tech ambassador, back in 2017.

Ambassador Larsen, you're the youngest ambassador in Danish history. Your job is to represent Denmark in the world's most significant technology hubs, such as Silicon Valley, which I understand includes building relationships and partnerships with some of the biggest tech companies in the world, as well as fostering innovation and driving digital diplomacy.

So the tech ambassador concept is still a relatively new phenomenon in the relatively old world of diplomacy, reflecting the fact that the balance of power in tech has tipped, as Denmark's foreign affairs minister put it in the Danish tech diplomacy strategy. So I'd love to dive deeper into the role of a tech ambassador, and specifically Denmark's tech diplomacy strategy, and explore how technology is changing the landscape of international diplomacy and some of the challenges and opportunities that you see on the horizon. How do you define your role as tech ambassador for Denmark, and what are your main priorities, especially with respect to the tech diplomacy strategy, which has three big prongs: responsibility, democracy, and security? Where are you focusing your efforts these days?

Anne Marie Engtoft Larsen:

First of all, thank you so much, Courtney, for the invitation to join you. It's a pleasure to be here. I think the most important thing to say about tech diplomacy is that it is really an effort to update diplomacy for the 21st century. For not only decades but centuries, we've been sending out diplomats to countries where there was something at stake for Denmark, whether in terms of conflict or war, big export market opportunities, or countries that hold certain cultural value and importance to us. And tech diplomacy is the recognition that it's no longer only countries that are shaping the environment we're in, that are shaping our market opportunities, our business opportunities, our democracy, our values. There's a new suite of influential actors, and that's tech companies.

And it's important to say they don't hold the same legitimacy as a country would. You can't elect them or un-elect them, but their products and services are all-encompassing in our society. They are literally with me from the moment I wake up to the moment I go to bed. They are the medium upon which the two of us are speaking right now. They are my work life, they are my social life, they are my entertainment, they are my news. They are the decision on who I match with on Tinder. I am actually married, but if I was on Tinder. They are the decision of what I watch. They are the decision of so much.

So I think there is the recognition that today these massive tech companies hold such big power and influence over Danish society, Danish democracy, Danish markets, and Denmark's opportunities in the future, some of it unparalleled, I think, by what we see traditional countries having.

So tech diplomacy is really about recognizing that we need to engage diplomatically with them. Because it turns out that the companies on the American West Coast don't get up and go to work every morning thinking about what is most important for a Dane. How do I develop a product that is meaningful to a Danish citizen? What about strong values, whether that be freedom of assembly, freedom of speech, freedom of religion, freedom to think, empowerment of the individual, or whatever values might be specific to particular cultures and jurisdictions? They don't think like that.

So it's my role as an ambassador, as a diplomat, to engage with them and talk about shared interests, shared values. And while we are, I think, finally stepping up to the task of regulating tech companies that have gone unregulated for decades now, we've also got to recognize that we're not going to have the perfect regulation tomorrow. So until we get there, there's the diplomatic engagement to talk about what is important to us. And even though they need to live up to the laws that we're making, we have higher expectations of their societal responsibility.

So that's what I do. I do that with a fabulous team. We're based here in California and in Copenhagen, and we have a network of about 20 embassies across the world in places where technology is having a big influence or that are big tech hubs. Because I have a global mandate: I am based here in Silicon Valley, but it's not only here that technology development is taking place, and it's influencing not only Denmark but the rest of the world. So we work with our embassies in Singapore and Indonesia, in India, in Beijing, in Ghana and Kenya, in Mexico City and Brussels and Paris and London and Berlin and all over, to ensure that ...

And I think this goes to your question: what is the most important thing we do? The most important thing is to ensure that technology actually delivers solutions to the greatest challenges of our time. Otherwise, what is really the purpose, if technology is not emancipatory, if it's not liberating us as individuals and humans, if it's not building more sustainable societies, more resilient societies, better opportunities for access to better education, healthcare, the green transition, renewable energy, transparency, accountability, human empowerment? We do that, as you said, through the Danish strategy on tech diplomacy. It has three core pillars.

The first, around responsibility, is really working on what responsible digital innovation looks like. But maybe more so, what is irresponsible? What does it look like to build responsibility in at the core, and to have values around responsibility, transparency, and accountability be a feature and not a bug in the technologies that we're developing?

The second is around democracy. I came of age in the 1990s, and that was a time when the world was moving towards ever more democracy. There was such a positive note to globalization and a deeper global connectedness, and now we're seeing a reversing trend. We're seeing multi-polarity globally, and democracy is far from thriving; it's being undermined in many places around the world, and our position is having a hard time.

So if we are to uphold those values that so many people lost their lives for during the 20th and 21st centuries, and make sure that human rights, democracy, and the rule of law remain features of the 21st century, we need to build that into our technological systems and the technologies we use.

And then the third pillar is around security, from national security to cybersecurity. Many of these technologies are dual use: they can promote peace, but they can also very much promote the opposite. It's about working with the tech industry to ensure that we harness these technologies for building more peaceful, resilient societies rather than the opposite.

Courtney Radsch:

You raised a couple of really critical points: the importance of embedding some of these values in technology, and the double-edged nature of technology. And a couple of questions come to mind. One is, are there some technologies that are fundamentally incompatible with human rights and democracy? I've been thinking recently about digital authoritarianism, and when you get down to the root of the technologies that are being used for digital authoritarianism, they look a lot like what we have in democracies; the real dividing line is how we govern them, the privacy laws, the democratic oversight, et cetera.

So are there technologies that you think are fundamentally incompatible? And then also, what are you hearing from these 20 different countries around the world where you have embassies, talking with all sorts of different types of populations with different concerns, but also probably a lot of the same concerns, and wanting to harness the amazing opportunities that technology brings?

Anne Marie Engtoft Larsen:

As you say, Courtney, most of these technologies are dual use. The same technology that can give you privacy can also be changed slightly to do the very opposite. For me, it has become increasingly important to fight for the right to privacy, whether you are living in an EU country or elsewhere. GDPR, the General Data Protection Regulation in Europe, has really focused on the privacy of the individual and how our data is harvested and harnessed and used. Part of that, I think, is upholding and ensuring end-to-end encryption in the platforms and messaging apps that we're using globally.

Often there seems to be this impossible relationship between security and privacy. I tend to think of them as a continuum: how can we ensure that we are addressing security concerns without compromising on the privacy of the individual? At larger scales, when we talk about harnessing AI, whether for healthcare, social services, or all these other opportunities, we need to ensure that we can anonymize data securely and consistently, and that it remains that way over time.

A lot of our focus now is also on cybercrime: how do we protect individuals there? That's from a security perspective, but part of it also goes to the privacy of the individual.

What are some of the concerns I have around other specific technologies? Look, I think technologies are not neutral. They are deeply embedded with values in the way that they are designed, developed, disseminated, applied, regulated, and evaluated, and in who has access to them and who doesn't. So if we design them in collaboration between democratic governments, like the one that I represent, and tech companies who hold the same values dear, we can actually design technologies that are not facial recognition systems used to increase the suppression of civilians. We can develop technologies that will not increase the dissemination of disinformation and sow seeds of mistrust in societies. We can do the opposite in the way that we're designing and implementing them.

I think all of that goes to say that we really have to think about the systems of technologies that we are creating. We have to think about the collaboration that we have between civil society, governments, and businesses. And while we come to the table with different interests at times, we have a shared interest in making sure that this does not become a stronger tool for the authoritarians, because it is right now. We see how authoritarian governments are closing down the internet if they don't like how the opposition is using it. We see how they're using surveillance technologies to prey on the privacy of individual citizens. Reversing that trend, in how we develop those technologies, who we sell them to, and how they're used, I think is up to us.

Courtney Radsch:

You mentioned privacy, and it does seem like that is one of the core values that differentiates the European approach from the American approach; certainly the companies that are based in Silicon Valley pioneered the data economy, what Shoshana Zuboff calls surveillance capitalism. And I'm wondering how far you are getting in your conversations with tech companies, because it seems like a lot of the differences there are fundamentally about business models. Meanwhile, in many other countries you have efforts at, for example, rolling out digital IDs and digital services that depend on knowing digital identities, which raises a different type of privacy concern. So privacy seems like ... is that one of the biggest things that you're working on, and where are you getting with Silicon Valley on that topic?

Anne Marie Engtoft Larsen:

Privacy is definitely taking a more important role than it has previously, and part of that is thanks to GDPR, the European regulation of privacy and data. I think the work that Shoshana and others have done in shedding light on the business models, and the challenges around some of those business models, has allowed for a new type of conversation about how much data it is necessary to have on an individual in order to deliver a good service or a good platform or good software.

Speaking of authoritarian governments and how we can, I think, have a better answer: how do we develop AI services that need less data, where a provider needs to know very little about you to give you a meaningful experience? And I think we're starting to see the tide turning within the tech companies, recognizing that we don't want tech companies to know our political preferences or sexual preferences or deepest inner secrets. That's not necessary for me to have a meaningful experience on a video viewing platform, or in a social media engagement, or in deciding what to buy on Amazon.

And so part of it is changing the narrative, and I think that's where we want to go with the tech industry and what they're opening up to: saying, look, you need to know less about us, because we are uncomfortable with you knowing this amount of information. Also because we can't simply fall back on the question of who you trust; it's about building systems that we trust in general, whether you are using a platform from Denmark or from California or from Myanmar or from Australia, because these platforms are multinational, they're global in scale. So coming from a democratic country, we push for developing more technologies where the business model depends on less personal information about you, with higher privacy and integrity. And we see how companies like Apple, I think, are really maturing in this space and choosing to say 'more privacy is good, we are a privacy company,' and we see that also faring really well with customers.

So all that is to say, I think we're starting to see a change to some degree. We still have some legacy business models that are dependent on a high degree of personalized information about the individual, and we're hoping to see much less of this. The new EU regulation also goes very close on this, saying that for young adults and for kids under 18, we want to have as little information about them as possible.

And then, from a government perspective, how can we build what Francis Fukuyama, I think, calls middleware? Some of that might just be a digital identity where you don't need to know anything about me; the only thing you need to know is whether I'm over or under 18 in order to grant access to a specific service. You don't need to know my gender, my location, or what I've been buying or searching or doing online. And one of the conversations I have with the tech industry here is: there is a moment right now, in the development of new platforms, new services, new AI, to invest in building AI systems that fundamentally need less information about me in order to deliver better products. Because that is where I think individuals and citizens in liberal states want to see tech development go.

Courtney Radsch:

That seems like the right approach for where we are now with technology, especially platforms, but we are now entering the era of generative AI, which is built on these vast data dumps, scraping of the web, and all sorts of other data. And some research by internal experts at a company, who were then fired and not allowed to publish it, showed that you have to sacrifice either privacy or accuracy in these generative AI models. And at the tech camp you talked about some of your concerns about recent advancements in AI and the impacts they're going to have on attention, engagement, invasiveness. You had mentioned mind reading. That seems to go in the exact opposite direction of the GDPR: yeah, you might not know my age, but you know my thoughts. So can you talk about how you're thinking about generative AI? Where does that fit in? What are you working on in this respect, and what are you hearing from Silicon Valley?

Anne Marie Engtoft Larsen:

It feels like we're living in an age of exponentiality when it comes to generative AI these weeks and months. ChatGPT was something we played around with just before Christmas and early in the New Year, less than three months ago. It was fun and exciting, and now we are seeing massive use cases, massive influence, and, I want to say, exponential growth of this technology. That certainly brings both incredible opportunities and massive disruptions, and because of the potency of that technology, I do think we should err on the side of caution. Not to be Luddite or anti-tech, but simply to say that right now in the EU we're negotiating AI regulation, and we've got to have proper time, because when 27 member states covering more than 350 million citizens, 27 different governments, need their own processes, we cannot and should not create regulation within days or weeks or even months.

It is a bureaucratic and slow process because we designed it that way, for people to have voice and influence, for civil society to have a meaningful engagement in these processes. And while we are building this new AI regulation, we're seeing this massive expansion of very powerful AI models that are coming close to a general purpose technology. I think it's a good time now to slow down a little bit and ask ourselves the meaningful questions of how we want to integrate this technology into our society. What are the high risk areas of use? What are the less high risk areas of use?

Two months ago, I think we had a hard time foreseeing how generative AI could do more than create funny speeches or monologues about oneself, or DALL-E coming up with a funny picture based on an interesting prompt. Now we're looking at how these tools are used across workplaces, universities, schools, newspapers, law; in so many professions the effects are really profound. And we've been seeing a lot of the deepfakes that have been coming out. They are getting really good. They're getting really close to what looks like the truth without it being the truth. So in that respect, I think it's about erring on the side of caution, having consideration for how we employ them, and getting, from a policymaker's perspective, full visibility into the power of these technologies. In what areas do we need to be specifically focused?

And then I think it's okay for us to say we're not going to implement them across all sectors yet. For some business-to-business uses, like optimizing energy use in your warehouse, if you can use GPT-4 for that, great. When it comes to changing how we run our basic welfare systems and benefits, or how we operate in the legal domain, wherever there's a human implication and where there could be concerns for individuals and citizens, we've got to be much more aware of both the opportunities and the risks and perils of these technologies.

Courtney Radsch:

It feels like there's a tension with the European approach, which I really appreciate as an American, where it feels like a lot of our policymaking is dominated by lobbyists and special interests, whereas in Europe it seems to be a much more consultative and deliberative process that can actually lead to things, as we've seen with the Digital Services Act and the GDPR you mentioned. As you said, we're in the era of exponentiality, and just a couple of months ago very few people really understood just how exponential and impactful this technology would be. Universities, law, journalism, et cetera. So do we have time to regulate? We haven't even managed to figure out how to regulate social media platforms, and meanwhile they've completely shifted geopolitical dynamics and democratic and non-democratic elections alike. Can we regulate these meaningfully, or are we just going to be left trying to work around the edges and make sure that if an AI system is deployed in a social welfare decision or law enforcement, et cetera, there's some awareness?

It also seems like that's insufficient. I just listened to a story about someone who was arrested in a state, to be extradited based on a facial recognition match on a video: an African American man who had never been to that state and didn't even know why he had been arrested. Is this technology going to move too fast? It is already getting incorporated into business models and services. Are we going to be able to regulate it? There was that letter from a thousand tech executives, including some of the top names like Elon Musk, et cetera, who have a vested interest in using these technologies, about the need for a moratorium. We've got Italy restricting ChatGPT's use in that country. It just seems like we're all over the place. So how do you balance the need to address the very real, very acute impacts with the need, as you I think rightly suggested, for a deliberative and informed approach to regulation?

Anne Marie Engtoft Larsen:

First of all, I think it's important to say that technology still, not yet at least, does not move by itself. It is directed by human beings. It is a human being that decides whether you're going to talk to a doctor or to ChatGPT. It is a human being who decides whether, when you call the government or the municipality to talk about whatever decision you might have gotten on social services or benefits, or when you're engaging with your child's daycare, that interaction is going to be mediated by ChatGPT and GPT-4 systems or by individuals. And so, to remain hopeful: of course we can regulate this. Of course. It just requires us, one, to do so and, two, to set the appropriate guard rails until it is regulated.

There are some cases where you can't apply it. There are some cases where you need to have multiple series of verifications or approvals. There is scrutiny in how decisions are made. There has to be a human aspect to decision making in some of this. So in the case you just described, in our judicial system, we can't just rely on AI decisions in and of themselves. There has to be explainability, and there has to be a human reviewing those decisions at the same time.

And so, all too often, this conversation becomes: there's a train coming at 250 miles an hour, and either you can jump on board and just go with the ride, or you can stay back feeling incredibly left out of the loop. My role as a tech ambassador, and I think the role of those of us serving as a link between industry and policymakers, is to make sure that that is not the case. We can and should direct these technologies.

And yes, we outsource tech development to private companies, and they're doing tremendous work in that sense, because it is really extraordinary what we're witnessing these days. However, it is still up to governments and the people they represent to set the guard rails for how technologies shape society and what type of society we want, making sure that they do become meaningful additions.

I don't think we have an interest in that ... For the past, it feels like, 10 years now, we've had a conversation about how much of human labor AI and robotics are going to replace. We have millions if not billions of young people around the globe, particularly in the global south, who are looking for meaningful jobs, where a meaningful job means a way out of an informal economy into a formal economy.

In my own country, youth employment is so incredibly important for feeling a meaningful part of society. It is about going from childhood to adulthood. It is about being part of a society and a community. At the same time, we have labor that's incredibly hard on the body, and we want to use technologies to alleviate that. But we don't want to render jobs obsolete. I really prefer to engage with a human being, whether it's about my medical records or my child's welfare, or, if I need to see a lawyer on personal matters, I probably want to talk to a human being and not a machine. So it is up to us to decide how we integrate these technologies, and yes, there might be use cases where you could use one to replace a human being, but I don't think it's necessary and I don't think it's something that we want.

So what I see right now is that it is excellent that people are going online and trying ChatGPT, DALL-E, and all these new types of generative AI systems to get a sense of what they are, without all of us having to be software engineers. And then, at the same time, insisting ... the Danish minister for Digitization will be visiting Silicon Valley about a month from now to do exactly that. To talk with the industry that is developing these new AI models and say: what are the expectations from the perspective of a Danish politician representing Danish citizens? How do we see the opportunities of generative AI for positive, meaningful engagement in our society, and what are some of the risks that we want to ensure are addressed?

Courtney Radsch:

So that brings us to what I think is a really interesting part of Denmark's AI strategy, because it explicitly deals with an issue that affects Denmark as a Danish-speaking country, but also many countries of the world that use what are considered low-resource digital languages; the digitization of language and its use in training models is really important for where we're headed with generative AI.

So can you tell us more about the common Danish language resource and about this pillar on open public sector data for artificial intelligence? Our systems are only as good as the data that they get. And to what extent are you hearing from the companies in Silicon Valley any interest in thinking about Danish, about how to make these resources better, more effective, more accurate, et cetera, in some of these low-resource digital languages?

Anne Marie Engtoft Larsen:

This is something that's very, very close to both my heart and, I think, the role of us representing a small country. It goes a little bit to the question you also had before: what's the difference between the European approach and maybe the American approach to this?

I think, first of all, Europe is more regulated on average than the US, and there are many reasons for that we could dive deeper into. But when it comes to technology, part of it also goes back to the fact that we are many different states in Europe, with very different languages and very different cultures. Coming from one of the smaller states among us, and knowing how much our culture, and with that the social fabric of our society, depends on the integrity of our own language, and how that language has deep roots in our ancestral history, in how we understand each other and ourselves in the world, it's important for us that it's not replaced with technology systems that are based on English, with Danish left representing an old world where things are very analog.

So in the Danish AI strategy and our own approach, we recognize, first, that there are probably not a lot of other countries out there that are going to make Danish datasets available. So we will do that ourselves, making sure that we have a resource; we now have a digital platform, let's call it languagetechnology.dk, which is about giving access to Danish datasets for training AI systems. We've also worked with a couple of the big platforms. The Kingdom of Denmark is not only Denmark but also includes the Faroe Islands and Greenland. Greenland has, I think, about 30,000 or 40,000 inhabitants and an Inuit language.

So a very, very small group of citizens, but very influential and important, and even more important, I think, is that you can connect with your language, because language is part of a culture. Not all of the tech platforms have their app stores available in the Inuit language; not all of them have been able to accommodate it. So it's been a really big priority for us to work with the big platforms and say there is a societal responsibility here, and it goes back to that pillar in our strategy about the societal expectations and responsibility of large tech platforms. It might be that the biggest money can be made on the English-, Spanish-, or French-speaking parts of their platforms. But for Danish, for the 6 million Danes we are, or for Inuit, for the 35,000 citizens in Greenland, it's so important for the technologies they're developing to be a meaningful addition to our society and to be viewed positively in that sense.

And so if we ... The same can be said for content moderation. There's been a lot of focus on content moderation for the past couple of years. We still see, and I'm very concerned about this, a lot of content moderation and a lot of moderators for English-language content, leading up to, say, the US elections next year, and very few for languages that not many people on the US West Coast speak.

So we have to make sure ... for me, democracy in Denmark is very dependent on there being content moderators who know how to speak Danish and understand the Danish language, and the same goes, I think, for a whole suite of languages around the world where there's not much focus on ensuring that the technology lives up to them. And maybe just to conclude on this question, I do think it's positive to see how generative AI can be used to produce data that doesn't exist.

Artificial data is becoming a thing now, and our advances in language technology are so incredible that we can include many more languages in the technology systems that we have. There might not be a strong business model, because these are smaller jurisdictions or smaller groups of consumers, but from a value standpoint, and from a standpoint of purpose, if you want your technology to be meaningful to many people around the world and a positive contribution to society, you need to focus also on small languages, not only in the basic services you're providing, but also in content moderation and in platform governance and operations.

Courtney Radsch:

Do you think that with some of these small languages it should be pretty easy to figure out who, for example, produced some of the training data that's used in these generative AI systems? Tech Policy Press interviewed an indigenous technologist, Michael Running Wolf, a few weeks ago, who talked about the challenge of incorporating indigenous culture in datasets used for training generative AI, and more broadly there's this question around copyrights and patents: who should own, or be compensated for, the use of data in the training sets? So I was wondering, how is Denmark approaching that, and how are you thinking about the copyrightability and patentability of things produced by generative AI, especially since you've got this first-mover, big-platform advantage for those who already have access to data and training data?

Anne Marie Engtoft Larsen:

There's certainly a huge question around copyright and ensuring that people are rewarded and reimbursed for the work that they're producing. There's the classic example of the iPhone, which is really a product of publicly funded technology driven by NASA and public research institutions, whether it's the GPS or the internet or the camera or the GPUs or whatever it might be in the phone. Then a private company makes it a really, really beautiful design, really intuitive, really operational, with its own operating system, but it's nevertheless built on technology that is to some degree, it could be argued, digital public goods paid for by the collective. As we move into a new era, we want to see large platforms being much more focused on and cognizant of ensuring that smaller languages are represented, that minority culture in whatever shape or form is represented in their models, and that the individuals or communities or governments delivering the datasets or the inputs for that are compensated, in particular when it comes to communities and minorities.

We can talk about Danish public investments in digital infrastructure; that's part of how our society and our welfare model function, and it's always been like that. So providing critical digital infrastructure in a public-private manner is something we're quite used to, and I think we have ways of ensuring that. But when indigenous groups, for example, are providing their language through datasets or whatever sort of medium it might be, they need to be compensated. If we only think of technology as defined by six or seven of the largest platforms, and they are the ones to drive technological development, technological infrastructure, and the technology that is supposed to deliver meaningful answers to an array of challenges in society, we reduce technology to something that is flawed. I believe we should think about digital public goods; for example, how can indigenous communities develop their own technologies?

How do we think much more about open source platforms where you can run your own datasets based on your own languages, and amazing technology applications come out the other side? So: digital public goods, open source approaches to software, public-private collaborations. Then we don't reduce ownership to only five companies, where it's a negotiation between them and an individual and there will often be a power asymmetry over whether or not you are compensated, but instead allow for a much more open, co-creative data economy, and I think a technology economy, where we don't become just the tech takers but all of us become a little bit more of the tech makers.

And I know it might sound like ... it's a bit of a utopian idea that that can be possible. But I do think these original ideas, and what we've been seeing with open source, and even with generative AI now, how people are using it and creating their own applications, are a much more positive way of thinking about future tech development than it always having to be a negotiation between a small party and a large tech conglomerate, which almost seems like David versus Goliath in terms of ensuring that you get adequate funding.

We've been seeing it when it comes to the news media bargaining codes: how do you compensate media for the journalistic integrity and content that they're producing but that's been used by the large platforms? I don't think we've fully figured out the best way yet. There's a compensation model now taking place in Europe, being implemented across the different member states of the European Union, and so far the money being transferred between large tech platforms and news outlets covers maybe one journalist per news outlet per year. That is not really changing the business model significantly. It's not really supporting media integrity and a free press, so we need to think much more about how we create much more open, collaborative, co-creative systems in that sense.

Courtney Radsch:

The EU approach to copyright, rethinking copyright, I think differs from, for example, the Australian approach, which is more about collective bargaining, et cetera. There's a lot to be thought about there, especially as we see Canada and maybe the US reintroduce legislation, and threats by some of the major social media platforms to cut off news. Do you think the platforms, just briefly before we wrap up, should be in the business of deciding which news outlets get to be on their platforms or not? And I have to ask you, since we're seeing this whole flare-up with Twitter labeling National Public Radio state-controlled media, and then the BBC, and relabeling them as government-funded, this seems to have reemerged into the public sphere. How are you as a diplomat working with Silicon Valley to think about their responsibility to support a free, independent, and sustainable press?

Anne Marie Engtoft Larsen:

I think really part of the answer was in your question, Courtney.

No, of course it shouldn't be up to the big platforms to decide which free media should be on their platforms or not, and I think it is a worrying trend that we're starting to see that. The dispute between Australia and Facebook, or the parent company Meta, around how to organize and especially how to bear the cost of the content created on their platforms, when we saw Meta briefly cutting off most of the Australian media for a short period a few years ago, really showed the immense influence and power these platforms have, and yet at the same time how dependent they are on the news creation that is coming from independent media. So I think, whether we like it or not, we need to recognize that there is a very, very intimate relationship now between traditional media and social media, and they are both dependent on each other.

Traditional media ... We also have to recognize that we have a challenge in our society, which is that most people prefer to watch, sorry to say, stupid TikTok videos rather than read a well-researched, well-informed article in The New York Times.

So part of it falls on us, I think, as governments and citizens and individuals: making sure that we're not reducing our attention span to dance videos and what some might call at times meaningless content, so that we can actually engage and have the bandwidth and brain capacity to read proper news. That's one.

Two, making sure that this very intimate relationship does not become antagonistic. They need to start seeing that this is not a competition between two parties over who gets what share of a product in monetary terms, but a shared opportunity to provide truthful, high-integrity content and information to a broad public.

Together, the traditional press and social media are really the new town square. It's the backbone of modern democracy, and it is the key medium through which our thoughts and opinions, our democratic dialogues and conversations, and our disagreements take shape. And in societies that are increasingly bifurcated, if social media and traditional media don't find a way to try to solve this together, they're both going to be, I think, culpable in what can be a deterioration of our society.

The role of a tech ambassador in that is trying to create a platform where we can actually have a conversation around it, and to see it not as opposites or antagonists but to look for the shared interest, which is about providing trustworthy information in societies where we can uphold the integrity of truth and facts, where we can have meaningful disagreements, but without falling into camps that hate each other and spread not only disinformation but outright lies.

Conspiracy theories are taking such a strong hold in our society, among very ordinary citizens who didn't used to fall into those traps. I think it is a shared challenge, and I find there to be a more mature conversation around it now with the tech industry, and also with the traditional media. I don't think any country has fully figured it out yet. What I do think, and that's the European approach to regulation, is that there is ultimately a political desire, and I think much more confidence in Europe, that yes, you can and you should regulate the tech industry. We regulate every other industry, so this is nothing out of the ordinary. We regulate everyone. That's the European approach: governments, and the citizens they represent, set strong guard rails for how we want technologies, industries, and private companies operating in our societies. There's nothing out of the ordinary in that.

It's not an anti-American-tech agenda. Quite the contrary: we believe that many of the products and services coming out of Silicon Valley have brought tremendous opportunities and added great value to our societies. But they have also come at a cost, and we've been a little too slow to regulate. Now we are getting up to speed, and we have, I think, a strong and confident European Union and member states who are willing to say: we will not have democracy deteriorate over this. Europe's history over the past 120 years is one of wars over democracy, sovereignty, human rights, the ability to self-determine, the ability to have strong democracies where minorities are protected. And we will not have the baby thrown out with the bathwater in the 21st century because we were not focused on making sure that the technologies that have become so omnipresent in our societies not only live up to, but deliver real value on, those priorities.

Courtney Radsch:

Well, Ambassador Larsen, thank you so much for that really thoughtful and nuanced analysis of where we are in this present moment and where we're trying to get.

Anne Marie Engtoft Larsen:

Thank you so much, Courtney. It was a wonderful pleasure being here.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
