How to Confront the Threat of AI Dictatorship
Justin Hendrix / May 10, 2026

Audio of this conversation is available via your favorite podcast service.
Is the future something to be calculated and controlled, or something we shape together through democratic struggle? How should we read the convergence of Silicon Valley's "Dark Enlightenment" thinkers with a resurgent authoritarian right, and is Europe truly reckoning with what has shifted in the United States? What is driving the continent's anti-regulatory mood? What counts as "evidence" sufficient to legislate a fast-moving technology, and at what point does the demand for proof become a license for the catastrophe to arrive first?
I addressed these questions and more with scholar and former European Commission official Paul Nemitz, who is one of the authors of a new book titled The Open Future and Its Enemies: How We Can Protect Free Society from AI Dictatorship. The book argues that three decades of under-regulation have produced the concentrations of wealth and power we now confront, and that the survival of democracy in the digital age will depend on citizens, civil society, and a new generation willing to treat their work as carrying responsibility not just for safety, but for fundamental rights and self-government.

THE OPEN FUTURE AND ITS ENEMIES: HOW WE CAN PROTECT FREE SOCIETY FROM AI DICTATORSHIP. Foundation for European Progressive Studies, 2026, by Matthias Pfeffer, Jürgen Pfeffer and Paul Nemitz.
What follows is a lightly edited transcript of the discussion.
Paul Nemitz:
I'm Paul Nemitz. I'm a visiting professor at the College of Europe in Bruges, where I teach a postgraduate seminar on AI law and data protection and privacy law. That's my retirement activity. For 30 years I was an official of the European Commission and the Director for Fundamental Rights and Citizens' Rights. And recently I've published a book together with my old study friend Matthias Pfeffer, who is a journalist and philosopher, and Jürgen Pfeffer, not related to Matthias Pfeffer. Jürgen is a professor of informatics at the Technical University of Munich. He previously was at Carnegie Mellon. And the title of the book is The Open Future and Its Enemies: How We Can Protect Free Society From AI Dictatorship.
Justin Hendrix:
I'm excited to talk to you about this book today. But before we do, I want to hear just a little more about that career, a little more about what brought you to the point you're at and to being able to put together a work like this. Just give us the brief history of your career, where you got started, how you ended up at the Commission?
Paul Nemitz:
I studied law at the University of Hamburg. I actually also was twice in the United States, so I know America quite well. Once at 16, as an exchange student for half a year in Quincy, Illinois, on the Mississippi, close to Hannibal and St. Louis. I loved that time there. That was 1978. And then again, as a Fulbright Scholar, I did a Master's of Comparative Law at George Washington University in Washington. And so I'm a career official of the European Commission. And since I'm also a political person, at a certain point in my career I, let's say, made it above the glass ceiling. Namely, I became a political civil servant, appointed directly by the commissioners and the college. I was a member of a cabinet of a Danish commissioner for development aid. I was for many years a trial lawyer for the European Commission in the Court of Justice of the European Union.
And in the last 17 years, I held positions in the area of justice and consumer protection. In the justice field, for example, I was the lead director for getting on the books the General Data Protection Regulation, the famous GDPR, but also a number of codes of conduct which predate the Digital Services Act, for example, the Code of Conduct on hate speech and incitement to violence on the internet, which we concluded with the big internet companies.
And, the final thing, I was also the Snowden investigator of the European Commission together with others; I was a co-lead of the delegation which investigated the impact of the Snowden revelations on the European Union and its citizens, namely mass surveillance and so on. So let's say in the last 15 years, all my work was in the triangle of democracy, law, and technology.
Justin Hendrix:
Well, that certainly gives my listeners reason to sit up and pay attention as we talk through this book, which is called The Open Future and Its Enemies: How We Can Protect Free Society From AI Dictatorship. I guess I'll just start because that is a sharp subtitle there. Folks might think, just on the face of it, this is a kind of doomer book, just looking at the cover and looking at the way that things are phrased. Is this a doomer book? Is this a book that is negative in its general orientation? How do you think about the mood of this book, as it were?
Paul Nemitz:
Well, I think every new idea is born out of a certain criticism and discontent with the present, and that every technological improvement, every technological innovation, is basically coming from an idea of what could be done better. And that's what this book is also about. It's absolutely not a doomer book, on the contrary. It says the future is open. We are shaping it all together, meaning that it makes sense to engage in democratic processes, in technological innovation, but also in societal debates in civil society and politics, because we together shape the future. It is not predetermined. It is not inevitable. It is certainly not calculable in mathematical terms or totally predictable by AI.
The message is, guys, get engaged. And it is a particular call, not only but in particular, to the technical intelligentsia here, those people who work on technology in the many different ways people work with and on technology, to reengage with the democratic process in a constructive way. Because we believe that innovation has not only a technological side; democracy is also a highly innovative way of governing ourselves. And this way of governing ourselves we want to maintain for the future. We want to see artificial intelligence and democracy thriving in the future.
Justin Hendrix:
I want to get into this notion of the future a little more. I mean, you spend a bit of time talking about the idea that the future is fundamentally unpredictable. You bring in Heisenberg, Turing, Gödel, others. You say early on that, "Today the future is contested by futuristic technology. The ability to calculate and thus control the future is central to the fantasies undergirding AI development." I underlined that in the introduction of the book because that feels like something that hits. There's a kind of fundamental anxiety going on among all of the creators of artificial intelligence. They're at once telling us that this technology is going to make the future unpredictable and unknowable, and that this technology is the way that we can shape and control and harness that future. That feels very true to me.
Paul Nemitz:
Well, I would say our argument is that those who claim that the future will be shaped and determined by technological innovation alone, in particular by AI and general AI, have not understood two basic facts of life. One, in physics, in quantum physics in particular, there is the famous mistake of Einstein, who thought that everything in the world is rule-based. And from quantum physics, we have learned that that's not the case. We cannot predict how the quantum will behave in the next moment.
In physics, we have this element of unpredictability and uncertainty. And in the human mind we have the ability to imagine, to imagine things which don't exist. And this imagination is very open. We cannot predict what humans will imagine tomorrow. And these two openings, the open future, are a call on us to shape the future together, not to leave the shaping only to, as Lawrence Lessig once said, the 200 engineers who determine how we live in the future, but to shape it in democratic process, meaning that we all participate and, yes, that we make rules for technology where this is necessary.
At least in Europe, we have a very good history in regulating technology. It starts after the Second World War, actually together with America, on the use of atomic power. Atomic power is often cited as something which is like artificial intelligence. I remember, and I think we have quoted here Elon Musk and Bill Gates, who said, "AI is like atomic power." Well, the civilian use of atomic power after the Second World War was accompanied on the global level by the creation of a legal framework and an institution, namely the International Atomic Energy Agency, which had very wide inspection rights inside countries, so an international organization which can go into countries and check what's happening with atomic energy. And the putting into operation of civilian atomic energy installations was preceded by the adoption of laws which regulated the risks of this technology.
And I would say that was a good experience, even though atomic energy later had a catastrophic impact and some countries, like the one I know best, namely Germany, have said goodbye to atomic energy for that reason. But what we want to show in this book is that the drive against laws which govern AI — which comes from the Trump administration, who even want to forbid American states to make law, and which comes from demagogues like Peter Thiel, who say that those who want laws and those who want to strengthen the United Nations are the antichrist, a formulation which of course makes me listen up, living as I do in Rome near the Vatican, though I'm not a Catholic — we want to say that they are on the wrong path. This is really very ideological thinking, very egoistic thinking.
These guys, they only think about enriching themselves. Our analysis is that their worldview is that very few rule, namely a few autocrats in politics, a few oligarchs in technology together run the world. And our vision is, no, the open future is a future which has to be shaped in democracy by all of us and this requires that we all engage in democracy. Of course, if we all lean back and do nothing, well, then others take over and that's not a good thing.
Justin Hendrix:
You spend a bit of time on this ideology, the dark enlightenment characters that we've talked about on this podcast in different ways, Curtis Yarvin, Peter Thiel, Marc Andreessen, the lineage to people like JD Vance, et cetera. In your view, maybe from the perspective of Europe, which feels like the dog, which feels like the tail here? Is it the far-right wagging tech or is tech somehow just a vehicle for the far-right?
Paul Nemitz:
Well, Kara Swisher, I liked her work and her book, Burn Book. She starts the book by saying, "Well, it's capitalism after all." And she describes how tech started in a very idealistic way. Also, she was a very idealistic reporter on tech, accompanying the developments. And then already in the first Trump administration there was a first lining up, and in the second Trump administration, even more, a basic coming together of Trumpism, even Trump as a person, and the leaders of big tech. And so I think our message is it is naive to only talk about the technology. We always have to talk about the technology plus the ideology of those who propagate, who develop, and who make money with this technology. This has to be seen together because, in the end, AI is a power technology and it is used to gain power over people and power over political systems.
I think that's one of our main messages. Let's not continue this naive debate about the great potentialities and so on. Let's look at the realism of who drives this forward and what their agendas are. And let's read their books and articles and understand the intentions, and let's take that seriously. Only if we look in a holistic way at this ideology, together, of course, with economic realities and technological realities, will we be able to grasp what politically is necessary to give a framework to this technology, but also to the business models which go with this technology. A framework which makes this technology one which supports democracy, which supports fundamental rights, which supports people in their freedoms. Rather than doing the opposite, namely taking away freedom by manipulating, and taking away democracy by basically supporting autocrats and authoritarian forms of thinking and authoritarian forms of government.
Justin Hendrix:
You say in the book, "The pace of authoritarian upheaval in the US is so breathtaking that it is necessary to look at the fundamentals of this transformation." Sometimes when I talk to folks in Europe, I get the sense that there's a little bit of an avoidance of this fact or a desire not to focus too much on it. Maybe it's unnecessary to focus on, or maybe the truth of it is so painful. But do you think it has really sunk in in Europe, the extent of what's happened here in the United States? Do you think that, for instance, your former colleagues in the Commission or others in European politics have really accepted what has clearly happened here in the United States?
Paul Nemitz:
Well, I think for all of us in Europe who are political people, we all grew up with an America leading on democracy in the world. I mean, after all, it was America together with Britain who brought democracy back to Germany after fascism. I was a Fulbright Scholar, so with American scholarship going to America to learn a little bit more about the democratic American way of life, about the rule of law, studying law at George Washington. So for all of us who grew up in this transatlantic atmosphere, looking up to the history of democracy in the United States in particular and the great achievements of democracy in America, this development, of course, is a shock. It's a complete loss of orientation and the good culture of working together. And I think it is sinking in more and more, but my feeling is the worst is already behind us.
We have now seen the very aggressive interventions of Trump. And by the way, already in the first mandate, Steve Bannon came to Italy and brought money with him and appeared here with the right-wing Prime Minister Meloni. Also Elon Musk did this. They intervened in the elections. J.D. Vance went to Hungary. All this was not very successful. We now have two governments in Europe which have come back to the democratic mainstream, in Poland and in Hungary, namely. And in the United States, we see that Trump has very, very bad opinion polls, now below 30%.
And I must say the recent paper of OpenAI on industrial policy for the intelligence age, which mentions the word democracy quite often, in my view is a signal that a certain repositioning is taking place in the tech scene, namely that they are preparing already for the life after Trump. I think the predictions in the tech scene are that this, with the bad opinion polls, with the war and so on, is not good for Americans. They don't like this. So this regime will not continue. And of course, if there is a return to normal rules of democratic government, you will see something in America which is a little bit like the awakening and the adjustment of the whole political system which Germany, for example, saw after the Nazi time. There was a lot of thinking about changing how politics is run, for example, how the media are organized and so on.
And my bet is that this will happen in America in the post-Trump time, and people are starting to think about it. I think it's good that they're starting to think about it. And I would say it's worth the investment, because hopefully the majority of Americans, but certainly others in the rest of the world, and certainly many here in Europe, wouldn't want to see America sinking again so deep as it has been sinking with Trump.
Justin Hendrix:
I want to come back to that point of conversation a little later. In particular, whether you feel the impetus towards European tech sovereignty, and some of the other movements towards protection from American tech or the dominance of American tech, will persist after the Trump administration, if in fact you're right that the kind of authoritarian slide that we're witnessing here is interrupted somehow. But I want to maybe just spend a moment or two on your general critique of artificial intelligence. I mean, you say that AI itself is an extremely problematic technology, just fundamentally. How do you arrive at that conclusion? What are the bases for why you state that?
Paul Nemitz:
Well, I think what we are saying is that claiming that this technology has the human ability of intelligence, and can replace humans in the form of general artificial intelligence in everything, namely also in good judgment — this is a claim which goes too far. Because, after all, this technology is a technology of mathematics. It's a technology of algorithms. It's not a technology which is in any way able even to emulate what makes humans, and what makes the good judgment of humans: the broader political understanding, the willingness and ability to change positions and to compromise. All our emotions, all our bodily functions, which are of course linked to how we are thinking. And our motivations, where do they come from? They come from dreams about the better, they come from discontent. All this AI doesn't have. So in the same way that mathematics doesn't explain which question the professor of mathematics is asking himself or herself for the next research project, these machines can't give the orientation in the world which humans need, beyond rewriting what already exists.
I think in part, our book recalls, or is an effort to recall, that our abilities go far beyond just assembling all readings and all facts of this world and then writing them up nicely. And we believe it is important, especially for humans as citizens, as participants in democratic processes, that we maintain the self-confidence that it is legitimate that we decide. We decide individually, we decide together. We are not failing creatures. We are not failing machines of information treatment compared to AI; we have both legitimacy and abilities which go beyond those of AI. And everything which anthropomorphizes AI, or claims that AI will be able to take over from humans in all respects, we believe is misguided.
It will be a tool, and we must keep it that way, and it is good to keep it this way. Take claims like that of Eric Schmidt: "We have to get used to the fact that AI is unexplainable and we have to live with it, because if we ask for explanations, then the performance of AI will go down and people will die." I think that's what he said about AI in health. Well, our answer there is that asking for reasons and asking for explanations was really the Enlightenment. It was ending the government by the church and by religion. It was a calling by Kant and other philosophers on humans to think for themselves and to leave this state of just accepting things. And we don't want to see humans basically being dominated by a new religion of AI where we have to accept all the results AI produces.
I personally am also very critical of this whole ideology of trust in AI. I would say rather the opposite: in the same way that we have been taught, and have been teaching our children, to be critical towards power, to be critical towards media, we should teach them to be critical towards AI. Because AI is power, and in part AI is taking over functions which media performed before, namely information preparation and distribution. And so we have to be extremely critical, and we have to learn new methods of understanding how to make sense of what AI gives us, to make judgments on whether it's useful or not.
Is it really true? Is it balanced? Is it one-sided, politically one-sided, or one-sided in other respects? Is it maybe discriminating between people of different color or different sex? All of these questions must remain very high on the agenda. So, I would say it is healthy to say people need to be critical towards power. They need to be critical towards technologies. That's what we have always said in democratic states, and we should continue saying this also towards AI.
Justin Hendrix:
One of the other arguments of this book is that the concentration of power we see in the tech industry is a result of what you call the systematic subsidy of under-regulation over the last 30 years. You say that overregulation, or even deregulation I suppose, is on everyone's lips these days, used to disparage law and justice, which are pillars of democracy. And I'm talking to you on a day when we've just seen the Commission put off its AI regulation, with a lot of headlines here suggesting, "Well, that's got to be in part because of pressure from the United States." We've got the omnibus proposal and conversation going on as well.
As a person who of course helped build the GDPR, who's been studying the insides and, I'm sure, moving commas and phrases in a lot of these documents over the last few years, what do you make of this moment, particularly in Europe, with regard to the orientation to regulation?
Paul Nemitz:
Well, people get the policy they voted for. The new European Parliament is much more right-wing than the previous Parliaments were. When we adopted GDPR, for example, the leadership in the European Commission was with Viviane Reding, vice president. She was from Luxembourg. She was a conservative politician of the same type as Mrs. Angela Merkel, the German Chancellor, same party, namely the European People's Party, in Germany the CDU. This would be Republicans in America, let's say the pre-MAGA, pre-Trump Republicans, and she pushed forward the GDPR. So the idea that the GDPR is a creation of the Greens or the left wing or something is a complete misunderstanding. It was pushed forward by a conservative vice president and it was adopted with a very big majority, including the conservatives, the EPP, in the European Parliament, together with the liberals, the Social Democrats, and the Greens.
Now the Parliament is much more right-wing. I would say there are too many in European politics who have been impressed by Elon Musk's loud talk and the chainsaw in the American administration. The proposals which have been made are largely not based on empirics. They are not scientifically underpinned but, as some high officials of the Commission have also said, they are catering to a feeling, namely a feeling of overregulation, and of course a feeling which is supported by the ideology which comes from MAGA and from American companies, and which has always come from American companies. That's nothing new. And of course, business people in Europe of a certain type also have this on their lips constantly.
In fact, I studied law in Hamburg, which is a business town. And in my first semester I learned that whining is the greeting of the businessman. And the whining which was meant at the time was, "Oh, the taxes are too high and the labor laws are too protective." So this view, this neoliberal view of how the world should be, has always been around, but it has got some, let's say, push in the last European elections, and it is pushed by America. And that has an influence on what the Commission is trying to do now in Europe.
But will they succeed with this? Will this bring growth? Will they actually get all this through the European Parliament? We will have to see. The function of the EPP as the center-right party is key here, because they must choose whether they stick with the democratic parties, namely the Social Democrats, the Greens, and the liberals, and make the compromises which are necessary to have such a majority, or whether they fall prey to the temptation to go with the extreme right wing. And this will be a test for this party, and it will be also a test for the Commission, whether it goes along with this.
My hope is that the EPP has the responsibility, and understands its responsibility, for democracy and doesn't fall prey to this temptation. And although these regulations seem to be very technical, that is the crux of the digital age: much of the regulation which sounds very technical is actually very, very important for the fundamental rights of people, for the freedom of people, and for how democracy functions. And I can just hope that in this very ideological drive of catering to the feelings of businessmen, European politics doesn't go too far to the right and is not too destructive of, I would say, the good things which we have built in the past, which created legal certainty and which are also part of the European identity.
Justin Hendrix:
I want to just drill in one small thing which you address in this same part of the book that focuses on these questions of regulation and law and power. You talk about the idea of evidence-based legislation and you write that the path to under-regulation has already been set with this demand for evidence-based legislation. I think about European digital regulation. I do think about this kind of notion that seems built into certain frameworks like the Digital Services Act that we're going to get the evidence, we're going to get the science, we're going to see if this rule is working or if it isn't, we're going to be able to empirically judge the impact of these policies. Why is that problematic in your view?
Paul Nemitz:
Well, it depends on what you consider evidence. I had recently a very good discussion about this with a number of very important scientists, and there are two ways one can read evidence. One is to say best available knowledge, which includes scientific theory. The other reading is more narrow: evidence means empirical evidence, things which one can already observe. My argument is that if, in a fast-moving technological area, we make rulemaking dependent on empirical evidence only, the law regularly comes too late and we have to wait for the catastrophe. My argument is we should not have a narrow reading of evidence-based, in the sense that it must be empirical evidence, but we must allow ourselves to employ the human ability which AI doesn't have, namely to think ahead and to imagine what a technology could lead to, what the potentials of the technology could mean for democracy, for the fundamental rights and freedoms of people.
And this thinking must be enough, if there's good reasoning, to justify legislation, even if there is no empirical evidence yet that these things are happening. If the potential is serious and can be described and derived with scientific arguments, I think that should be enough to make legislation which looks forward and which avoids the great catastrophe, rather than having to wait first for the catastrophe and then regulate after. And the catastrophe which we are worried about is that democracy will be done away with. Because at this stage of technology, and at this stage of total surveillance potentiality, once democracy is done away with, it's very difficult to come back. And I just hope that America still will make it.
As I said, we were all very, very happy to see Poland and Hungary come back. Let's hope America will be able also to come back. And for example, make Congress work again and make it possible again that Congress adopts laws on these very important and high risk technologies like AI. It would be much better for the world also for Europe, but also for the transatlantic relationship if America would take charge of regulating the huge economic and technological power it has rather than leaving it to Europeans.
Justin Hendrix:
I wonder if we are capable of that, or if we don't now see these technology firms, their size, their concentration as vehicles for empire.
Paul Nemitz:
Well, I must say I'm in many contexts still very connected to America, both with my mind and my heart. And I remember, for example, Al Gore saying, "We don't want the stalker economy." That was about the privacy of people being invaded by Facebook. I have also seen much scientific work in America, from the academic world and from civil society, which is very critical of what's going on in America. For me, still the best sources of criticism of American companies are American civil society organizations and American journalists, including the specialized press, but also the broadsheets. The mass surveillance issue, the snooping on people, that is something Americans also genuinely don't like. In the opinion polls asking people, "What do you think about the government or big corporates sucking up all your private data and making profiles about your life?", the American people are as critical as Europeans are.
I don't think that there is, let's say at the basis, a different attitude. The same on economic power: America has a great history of antitrust, and it has just got to remember this history and bring it back. And so I think we will watch, and I have hopes for the self-healing power of American democracy, that it will overcome these dark times. And then use again the instruments of an independent FTC not under the thumb of the president, of a justice ministry which can really do its job, of a Securities and Exchange Commission which is independent, and so on.
So, I think the potential in America to rebalance this concentration of power, which is also not good for American business and the American people, is there. And well, it's your job to get democracy moving and to get enough people to engage, not only in the next elections but, I would say, also on a little bit more continuous basis, to make American democracy thrive again.
Justin Hendrix:
Perhaps consistent with the optimism you just expressed, you write late in the book that resistance is stirring in many places, that you think that movements against authoritarian and intolerant politics, and against manipulation, expropriation, and disenfranchisement by the big tech companies, their business models and technologies, are growing. You end this book with a call to action. What do you think Tech Policy Press listeners should go and do? What do we all have to do to help constrain the power of this industry and to help usher in a future where artificial intelligence isn't such a threat to democracy?
Paul Nemitz:
Well, we are trying to say to people that there are really three levels on which they can engage. First is the micro level, your personal choices, which technology do you use for what purposes and which technology do you allow your children to use and so on.
And then there's a second level, which is your closer environment. What technology does your school use, and what technology is used in your company? What technology do you participate in as a manager or a worker, and are there choices possible, and is there a debate possible? Can you convince people in your company to use a different type of technology, move to open source, move to the Fediverse, move to more transparent business models?
And then of course, the third level is the political level. People can, of course, engage with political movements, political parties; they can engage in civil society. And civil society in America is extremely strong. I would wish that this engagement, especially of those who know about this technology and who work with it — the people who have studied informatics, the people who have studied mathematics, physics, the engineers — that they understand that we are in a new age, in the sense that they now carry a responsibility for the fundamental rights of people and for democracy, which previously they didn't carry.
They always carried a responsibility for safety, in terms of avoiding physical damage, in terms of avoiding an electric shock to a person. But now their responsibility goes much further. And I think this is something which people really need to take to heart when they work in a technological context: that this technology now touches on much more than just the physics of things, as it did previously. It touches on how our society works. And therefore we need the universities also to take charge and to start educating engineers whom we call engineers for democracy. We need people who are educated, on the one hand, to understand the technological potentiality of what's coming: what can this technology do, what can AI do?
And on the other hand, they must have an understanding of society, of the rights of people, and of how democracy works, so that they can grasp the technology's potential impact on society. They must be able to do impact assessments of technology so that together we can orient technology toward sustaining democracy rather than undermining it, and toward sustaining the fundamental rights, freedoms, and liberties of people rather than taking them away through dirty engineering for persuasion, which makes people dependent and allows them to be manipulated rather than making them thrive in their freedoms and their conscious choices.
Justin Hendrix:
Almost at the very end of this book, you say, "The future of democracy, like the future of freedom and self-determination, will be decided in the digital arena. It is by no means certain that democracy will survive AI, but perhaps it can also emerge stronger from this transformation. The choice will be made through digital policy." Sounds like a call to action for Tech Policy Press listeners, no doubt. On this Thursday morning that I'm speaking to you, are you optimistic? Are you optimistic that we'll be able to run this course?
Paul Nemitz:
Yes. I would say the fact alone that Tech Policy Press exists and is so successful is already cause for optimism. I'm also the European honorary ambassador of the Boston Global Forum. I have very good discussions there, for example, with people from MIT and Harvard University. I've really met many people in America, all of whom I call the good America. I think it is very strong. I think it is still there, and there's ample opportunity to engage.
And to come back to the recent paper of OpenAI, but also, for example, to mention the paper on AI and democracy written by the lawyers Woodrow Hartzog and his colleague at Boston University. I've seen many other papers by professors of informatics and so on. If you look at our democracy now, there is a positive trend in America to work on these subjects with optimism and with rigor for the future.
As I said initially, I read this paper of OpenAI on industrial policy for the intelligence age as a repositioning, a little bit like Mark Zuckerberg tried in 2019 with his article in The Washington Post, in which he called for regulation, seemingly giving up the drive against democratic regulation. I would hope that this time we will not see another falling back into the anti-legislation drive which we saw with Facebook and Mark Zuckerberg. I would hope that this new work taking place in America and Europe, on AI for democracy, on engineers for democracy, on AI which supports people's choices as free people and not as manipulated people, really becomes stronger. And I think it is already a world trend.
And many Americans play an important role in this. I would say not only congratulations, and I take my hat off, but I think you have a huge potential to come back as the lead nation on democracy in the age of artificial intelligence. And that's what I would wish for the post-Trump future of America.
Justin Hendrix:
Well, let's make a plan to come back and revisit this and check in and see how it's going at some interval in the future. Paul, I appreciate you speaking to me today. This book's called The Open Future and Its Enemies: How We Can Protect Free Society from AI Dictatorship. Paul Nemitz, Matthias Pfeffer and Jürgen Pfeffer. My best to them as well. Thank you so much for joining me.
Paul Nemitz:
Thank you, Justin.