A Conversation with Mark Surman, President of Mozilla

Justin Hendrix / Aug 25, 2024

Audio of this conversation is available via your favorite podcast service.

Last week, I had a chance to speak with Mark Surman, President of Mozilla, about Mozilla's work promoting open-source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. Surman told me about Mozilla's initiatives in AI investment and development, and reflected on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And he shared his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people's values and dignity.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Mark, you are still President, I understand, of the Mozilla Foundation, but have given up the executive director role and you've brought an exciting person in to do that.

Mark Surman:

Yeah, I'm really excited about the changes. I keep my president role, which is really looking at the overall long-term ambitions of Mozilla and the different organizations we run and the communities we serve. A lot of what I've been doing in the last few years is expanding us into new areas like venture investing. We set up Mozilla Ventures about 18 months ago, and also a commercial AI R&D lab: we put $30 million into a new company called Mozilla AI. So a lot of what I'm doing is expanding in those new areas and bringing in new leaders. Nabiha Syed has come in as the executive director of the foundation, and she's really taking on the programmatic work I did: giving out fellowships, setting our agenda, working on campaigning, all of that kind of stuff, which has really been the heart of our work on AI, trying to shift the conversation in a different direction. So she's going to pick up that banner and keep running with it.

Justin Hendrix:

So when I was thinking about this conversation, I was almost imagining it as a kind of tour of maybe your executive dashboard, your executive concerns. I wanted to start with a topic you just wrote about in Tech Policy Press, around AI and open-source. This is a topic that's coming to a head, coming to the fore in a variety of different ways: at the federal level in the US, in state legislation, and also as we look abroad to the EU. If we were to crack open the file on your desk called open-source, if there is one, what would you put in it? What are the considerations and things you're worried about there right now with regard to AI?

Mark Surman:

I'm worried and hopeful. Mostly I'm hopeful, in that we believe as Mozilla that open-source is a key to innovation, creativity, and the idea, basically, that people can run with and shape things in the digital world. That's why we got excited about the web, and it's what we hope can happen in the AI era: that there's not one or a few central players that define how things go, but that lots of people can start businesses or create art or do whatever weird and fun things they want. Open-source is a critical ingredient of that. Just as much as in Web 2.0, we need open-source AI. Of course, tons of the tech and tons of the science behind AI is open. That's actually the tradition of what got us here so fast. But you really saw, especially with OpenAI, a decision to close things down, a very explicit decision to go in the reverse direction.

And some of that may be for safety, and a lot of it, I think, is for commercial gain and trying to create moats. And so over the last few years, you've seen a ton of the innovation go behind closed doors and into a black box, and the leading players, OpenAI, Anthropic, others, have taken things in that direction. Where I'm hopeful is with Llama, with smaller models that have come from the big tech players, but also what's coming out of some of the independent open-source AI labs and from the hacker community. A brief pause in open-source innovation seemed to happen in the last couple of years, but we're now just seeing all this stuff happening again in open-source in generative AI, so I'm hopeful about it.

Justin Hendrix:

You wrote for us about the NTIA report on AI openness. It went through various scenarios, risks, and benefits, looked at the evidence, and came out with essentially a determination on whether the government should carry on investing in openness with regard to AI. You say it's time for the US government to double down in this direction, and you think that the next administration, no matter who occupies that seat, should pursue this approach. What do you think needs to happen at the federal level in the US to lead us in a more open direction with regard to AI?

Mark Surman:

If you look at the last 20 years of tech, open-source and open standards have been at the heart of creating the innovation and wealth that we've seen. Linux, Apache, web standards: they sit underneath every company that's been built, and I think there's a report that came out that talks about open-source having created $8 trillion in value. It's clear that making it possible for anybody to run with the key building blocks that make up tech in any particular era is good for innovation and good for the economy. And there are questions of competitiveness and security that people have raised to question open-source. But I think the best way to compete is to create that Lego set in the AI era just as we did in the Web 2.0 era, and let people run with it. And of course the government has a role here too: historically, the internet has also been fueled by government investment, through the NSF and even DARPA and ARPA, in those fundamental building blocks.

We see NAIRR, the National AI Research Resource, the idea that there would be underlying compute that fuels open-source and open research in the US, as a really positive step. Continuing that tradition means academics or even small companies can be moving ahead in AI innovation and have some kind of substrate of compute resources, and hopefully the data, to build on. We believe you just need to see more of that, and you're actually starting to see it in things like SB 1047 in California, though we have concerns about parts of it related to open-source. It's got CalCompute in there, which is another build on something like NAIRR. So we do really see that public infrastructure that lets companies, researchers, people who want to move this stuff ahead in the common interest, in the public interest, is a really good use of tax dollars.

Justin Hendrix:

In California, as we get towards the end of the month and the end of the legislative session there, we've got a bill that could change the landscape, potentially well beyond California's borders.

Mark Surman:

Yeah, as I say, that California bill, SB 1047, has some positive things in it, like CalCompute, and it's raising a lot of the right questions around AI safety. We want to balance safety and openness. The questions in there are good ones, but the way it's written is really speculative, and in particular it could harm open-source, in that it asks people who are developing tech, which in the case of open-source is often a community or a small group of researchers, to predict how people might use their models and to protect against those uses or be accountable for them. And so we don't think they got things quite right in how to implement this. There were some changes that came out last week that could put limitations on who would be responsible in open-source. There's a $10 million limit on fine-tuning that says if you're doing a small enough project, you're not liable. But we haven't really dug into the changes yet, and we're not sure whether they're enough. So we're quite cautious about the specifics that are in that law. The intent is good.

Justin Hendrix:

I want to shift gears a little bit and ask you about something else that I'm sure is occupying a large folder on your desk, which is competition issues. Maybe we'll start in the US and then look abroad to Europe and what's going on there. Of course, we've just had the ruling in the Google search case; Judge Amit Mehta issued his ruling earlier this month, and there are a lot of mentions of Mozilla in that document. I think 51 different mentions of Mozilla, including some specifics about your business and the extent to which Mozilla's revenues are driven by its revenue share agreement with Google. How does this ruling potentially impact your business going forward?

Mark Surman:

I have to say I didn't count, so to know that there's 51 mentions is pretty interesting, although I know it's a lot. What I would just say is Mozilla has always been at the forefront of arguing for competition, for choice. And so unlike any other browser, certainly unlike the mainstream browsers, you can choose any search engine in Firefox. When you type in a search, it'll actually offer, "Do you want to search in Google? Do you want to search in Bing? Do you want to search in Wikipedia?" And you can also configure what your default is. So we've always been a champion of choice, and it is true, most of our revenue has come from our agreement with Google, because that's what people want to use. We have agreements with others as well.

We've been doing a lot of work on revenue diversification, and the amount of revenue coming from other sources, including our own advertising, has really started to grow in the last couple of years. We're going to continue to double down on that revenue diversification, which is why we've done things like invest in Anonym, a new company focused on privacy-preserving advertising, and in our Mozilla AI startup, which we started last year and is focused on developers and making it easier to use open-source AI.

Justin Hendrix:

Is there anything at all you could say about potential remedies in the case? Are you following them very closely? Does Mozilla have a point of view on this, or is it too sensitive to get into?

Mark Surman:

It's more that it's too soon. We're following it very closely, but what's going to come out in terms of remedies is something that's going to take a little while.

Justin Hendrix:

When you look abroad, when I talk to folks in Europe about the Digital Markets Act, it's almost like folks are excited, frothing even. There's a lot of enthusiasm over the DMA and the extent to which it may also soon create great waves in tech. What are your considerations around the DMA in Europe?

Mark Surman:

The thing about the DMA is that it's a first big attempt to really look at competition in this era. Competition laws tend to get updated every generation or couple of generations as a way to reflect the dynamics of a particular defining industry, once it's been around long enough and has started to consolidate. You saw that with things like telecommunications, or oil and transportation 100 years ago. And so the DMA is really the first tech-era competition law, and it takes into account how network effects work, how platforms work, and what vertical integration looks like in this internet era.

So that's potentially a once-in-a-lifetime thing, kind of defining how things might work in this era. I think that's why people are so excited about it, and how that plays out then becomes the question. We've already seen some positive effects of the choice screen for browsers as a result of the DMA. In Germany, I think it's like a 50% increase in Firefox usage, 30% in France, and that's just because people are being given the choice. Ultimately that's what competition is, and it's a key piece of capitalism: people can choose different products. So that's pretty exciting, if the DMA can work and then provide a model for other countries.

Justin Hendrix:

When you zoom back out more generally and think about the kind of regulatory moment we're in, it almost feels like, to some extent, Europe's headed in one direction and the US is either spinning its wheels or maybe staying in one place. Are we in a kind of divergent moment between the US and Europe?

Mark Surman:

It's too early to tell, really, whether we're in a divergent moment. I think Europe is moving ahead faster than the US. We've seen that in GDPR, which wasn't quite right, and then the DSA and DMA, which try to take a new approach to things, and the AI Act, digging into the core questions of this era about the relationship between the public interest and private interest. Europe has just dug into it more. That happens with any generation of economic change: it takes a while to work out. There are the early innovators, and then, once it becomes important enough in society, you have to figure out what the balance is between the public and companies. Then you start writing laws. It's a natural evolution of new industries. I think Europe is just further ahead with it.

And then whether the US is going to diverge in policy or just not get to policy, you look at how long it's taken the US to get to privacy regulation. I mean, it's been going on for 20 years, people trying to get this done, and it's really hard to push things through Congress. So I think we don't really know yet, because we don't know what gets passed in regulation. I do know that with the Chevron decision, which doesn't take away federal agencies' ability to do rulemaking but limits it, or opens it up to being challenged in the courts, what Congress does is going to be more important. So I certainly do hope that the US can get it together on things like privacy regulation at the federal level or AI regulation at the federal level.

Justin Hendrix:

As we head into this election cycle, are you joining any of these kind of big tech consortia focused on AI issues, election disinformation, fears around generative AI and synthetic media? Are you getting yourself involved in any of that activity?

Mark Surman:

As a browser, which is still Mozilla's biggest product, we're not in the business of social media. We're not delivering the content, so there's not that much we can do in the direct mediation of content. What we do is support a lot of independent researchers. We've seen from elections past and elections around the world that independent third-party watchdogs are a critical part of making sure that the social media platforms do their best. So that's our contribution: working with and funding researchers who are watching the social media platforms for misinformation and for good trust and safety practices. And of course, as some social media platforms defund trust and safety, especially outside the US, that's a particular thing we worry about and keep an eye on. And obviously some platforms have gone much further than others in that.

Justin Hendrix:

You mentioned that you started a unit to invest in AI, for instance. Where do you think we're at? It seems like there's a little bit of a kind of wobble at the moment around generative AI in particular, some concern that perhaps things aren't delivering returns or satisfying the business sector as fast as perhaps some of the folks in Silicon Valley had hoped. Do you think that we're potentially in a bit of a bubble when it comes to AI or at least with generative AI?

Mark Surman:

The thing I would say is people are often looking at these tech questions and these market questions around AI in too short of a timeframe. We've had AI as a central part of the tech business for at least a decade, maybe 15 years. It drives how social media works. Big companies have been driving an agenda there for that period of time, and I would say we're in the Ford Model T era of it. So what does it mean? Are we in a bubble? Probably, related to over-indexing on LLMs and particular aspects of generative AI. Will some investors lose money as a result of that? Probably. Did people lose money as investors at the end of the dot-com boom, when we had the crash? Yes. Did they over-invest in fiber? Yes. And did that stuff ultimately pay off in terms of a rich internet, Web 2.0, trillions and trillions of dollars of new wealth? Yes.

And so I think these things come and go in cycles over many years, even decades. So pieces of it are overinflated right now. Are we also at a really exciting moment of innovation that is going to create long-term value? Yes. I just don't think we all know what it's going to look like yet.

Justin Hendrix:

We had a piece recently that kind of wondered aloud about the possible implications of the bursting of a bubble for policymakers, the extent to which some of the breathless conversation about regulating AI seemed to come alongside the hype around ChatGPT. I know you were, I believe, one of the invitees to the Senate's AI Insight Forums last year. When you think about activities like that one, like so many of the round tables and white papers and other things that have happened, and about the extent to which those things are also tied to that hype cycle, is there any danger in the hype cycle being tied to the policy cycle? Would policymakers be set back by some kind of AI downturn? Do we perhaps take our eye off the ball of long-term risks in that regard?

Mark Surman:

There's no question that in the US in particular, the ChatGPT hype cycle, or the hype cycle that started with ChatGPT, influenced Washington, and players who've been in this game for much longer than that hype cycle kind of dove in and tried to push their arguments: the ones pushing safety to the detriment of open-source, the ones who've been trying to put privacy back on the table, which includes us, the people trying to drive competition. The hype cycle does shine a light and does get attention there.

At the same time, I think we're just generally at a point in the evolution of the digital industry, of which AI is just one moment or one element, where governments are learning how to regulate and balance the public and private interest in this industry. It happens with every industry. It happened with the Industrial Revolution; it happened with automobiles. There are decades of innovation, and then you start to see government figuring out what its role is. We may see some things turn down the volume if there's a crash in AI, but I also think we're getting to the point where governments are just learning how to regulate tech. I don't think that's going to go away. And I think it's a good thing.

Justin Hendrix:

We've talked about your commercial priorities, we've talked about the competition issues, we've talked about AI and open-source. You've mentioned the more general regulatory questions, you've mentioned Chevron and we talked about California. Any folders on your desk that I missed or you wish I'd asked about?

Mark Surman:

The one I would just say is I hope that we come out of this era and all this conversation about AI with as much of an ambitious investment mindset from governments as we do with a kind of rope-it-in regulatory mindset. And so I think things like NAIRR, the National AI Research Resource, or CalCompute are seeds for what I think of as public AI: the idea that there should be a counterpoint to what's coming out commercially, and that there should be a public lane that all of us can build on, whether we're a small company, a researcher, or an artist. And I don't think that idea gets enough focus right now. My hope is that's something that'll change. Yes, we want a big, commercial, fast-moving version of tech and AI, but I think we also want the public broadcasting or public highways equivalent of that as we go into the next couple of decades. My hopes and my work and my fingers are crossed for that public AI option.

Justin Hendrix:

When you cast your mind forward, if you can see five years out or so, what do you hope will be your legacy looking back, both at Mozilla and perhaps more generally in the tech ecosystem?

Mark Surman:

Increasingly, I think more about the economics than about the technology, and ultimately the two things are very intertwined. But what I hope is that we have a more balanced tech economy where more players can be in the mix, so that there's more competition, but also so that more business models can emerge. That it's not all purely extractive, that we can have business models that reflect people's values and respect people more. And that's what Mozilla tries to be. We're a non-profit that owns a $500-million-a-year tech business and has started more, and we set up this venture fund to invest in other responsible tech companies. We're in conversations with a bunch of big philanthropic investors who are interested in there being more responsible tech companies.

I would love to see the tech economy really diversify, not just in terms of competition, but in having more responsible tech options that people can pick: options that respect their privacy, that look at involving users in decision-making, that give people more control over the choices they can make. So I think that's the biggest thing: a more diverse, respectful tech economy is what I hope for, and I hope Mozilla can contribute to that.

Justin Hendrix:

Mark, thank you very much.

Mark Surman:

Thank you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
