Unmasking Big Tech’s AI Policy Playbook: A Warning to Global South Policymakers

Michael L. Bąk / Oct 22, 2024

The tech billionaires just won’t stop. These self-proclaimed oracles of our AI future continue to breathe life into that familiar bogeyman: regulation. Their warning to policymakers (landing particularly well in countries in the Global South dependent on big tech to drive digital growth): embrace regulation, and you’ll stifle homegrown innovation, relegating your country to a perpetual state of FOMO. Some would-be emperors in Silicon Valley pair this bogeyman with a clever manipulation of the term open-source to mislead, confuse, and divert policymakers’ attention. Together, the bogeyman and the misleading use of open source obscure the real stakes at play, especially for the Global South: consolidation of power, anti-competitiveness, dependency, and extractive profiteering at the expense of people, cultures, and communities.

The False Choice: Open vs. Closed AI

Recently, Meta’s Mark Zuckerberg took to the pages of The Economist to promote a kinder, gentler regulatory environment for his distorted version of open-source AI. He chastised European regulators for risking their chance at a once-in-a-generation AI opportunity because of the EU’s "overly complex" regulations. In other words, complex rules for a complex technology are a threat to open technologies and innovation. Never mind that the technology, the underlying business models, and corporate behaviors will continue to create long-lasting, disruptive impacts on our societies, economies, and political systems – and on our lives.

But rather than a genuine, inclusive discussion about how governments should approach AI governance, what we are witnessing instead is a clash of seemingly competing narratives swirling together to obfuscate the real aspirations of big tech. The advocates of open-source large language models (LLMs) present themselves as civic-minded, democratic, and responsible, while closed-source proponents position themselves as the responsible stewards of secure, walled-garden AI development. Both sides dress their arguments with warnings about dire consequences if their views aren’t adopted by policymakers.

But here’s the truth about big tech and AI: it doesn’t matter. The open-closed source debate is a short-run distraction. Regardless of which approach prevails or dominates, for big tech, the destination is the same: accumulation of power, anti-competitive behavior, and unchecked dominance by a handful of tech behemoths.

The Open-Source Charade

Meta’s release of LLaMA into the wild as “open source,” likely the world’s largest and most expensive LLM to date, is a masterclass in deceptive positioning. Consider that Meta’s 2024 AI spend, around $30 to $40 billion, is in the range of the inflation-adjusted cost of the Manhattan Project that ended WW2. At these sums there is no free ride, despite Meta framing the release as an act of generosity and positioning itself as a force for the public good. But let’s not be fooled.

Meta’s so-called embrace of open-source AI is a hijacking of the term. Despite the shiny rhetoric, LLaMA isn’t truly open source. Meta has strategically walled off key components, including the training data, and restricted who can use it (genuine open-source licenses do not carry exclusions like “you can’t use this if you have more than X users”). Meta is effectively preserving its market dominance while aiming for the reputational benefits that come with the enchanting "open-source" label.

Here’s the kicker, though. Even if Meta were to make LLaMA fully open, the beneficial impact on the broader ecosystem would be negligible. Why? Because only Meta can keep the stream of LLaMA models coming: only Meta has the data and the scale of compute infrastructure required to train and deploy such large-scale language models. They are ensuring they own the “standard.”

Meta's compute power — its vast infrastructure of GPUs — dwarfs the combined resources of governments, research centers, and non-big tech private sector actors globally. Even with access to the data, independent researchers would find it nearly impossible to replicate the models. The cost of compute is simply too high. In other words, the so-called openness doesn’t serve the public good — it entrenches Meta’s dominance.

While it is true that an action such as a government sanction could not easily take away LLaMA models already downloaded, whereas cloud AI models could be cut off, the value of this take-home benefit is vastly overstated because the models lose value fast as they age. So, in reality, should Meta stop producing LLaMAs, the resulting dependency is similar to that of the cloud model case – just time-delayed.

And what happens when Meta has put everyone out of business? (Remember, companies can’t continue training models for hundreds of millions of dollars only to get the price dumped to zero – except, of course, those like Meta with massive, massive ad tech profits underwriting them). As always, with big tech, the first hit is free.

By leveraging the feel-good fuzziness of open-source technology for this first hit, Meta is ‘open washing’ – portraying its LLMs as more open and transparent than they truly are. By branding LLaMA as part of the civic-minded, collaborative spirit of open source, Meta is able to aggressively scale, maximize profits, stifle competition, and co-opt free R&D from the commons.

As my colleague Georg Zoeller at the Centre for AI Leadership in Singapore recently explained:

“By open sourcing the basic tools for AI, hyperscalers have enabled startups to build products in the field and used it to replace internal teams as the primary source of product R&D. In the wide open field of possibilities for AI use cases, it represents a carpet bombing approach to innovation - give away the basic tools for free and let startups do the R&D at marginal cost to the hyperscaler … [Then] the hyperscaler can identify exactly when to swoop in and pick up the validated growth opportunities at no risk, all while getting positive PR for association with “open source”, using acqui-hire as a regulatory foil.”

All of this happens in plain sight, while possibly securing the less rigorous regulatory treatment designed for genuine open-source projects and allowing big Global North companies to further consolidate their power. Most worrying of all, it accelerates AI, scaling both adoption and research to warp speed, beyond the reaction time of society and policymakers – usually with the arms race metaphor (China) as a justification sweetener should policymakers and thought leaders think too critically about the accumulation of unchecked corporate power.

So why doesn’t Meta employ a different nomenclature for its semi-open LLM? Because “open source” sounds more democratic, more generous, more, well, open. It’s more enchanting.

And that enchantment is being leveraged as a strategy to distract policymakers, creating the illusion of transparency and democratization while presenting a false binary choice, tying up valuable thinkers and bureaucratic bandwidth in time-consuming debates and discussions. The more time spent debating open vs. closed models, the less attention is given to the real issues: data governance, anti-competitive behavior, accumulating power, platform liability, tax revenue, and the broader societal impacts of AI. Societies and policymakers can’t keep up. By the time they realize the debate was a distraction, it will be too late. The power will have already shifted significantly into the hands of a few Global North companies. Meanwhile, the real goal remains the same: maximizing profits, stifling competition, and accumulating power.

The Regulatory Bogeyman

Also at the heart of big tech’s strategy lies the familiar bogeyman: regulation as the stifler of innovation. For years, tech giants have employed scare tactics to convince policymakers that any regulation will choke innovation, lead to economic decline, and exclude countries from the prestigious digital vanguard. These dire warnings are frequently targeted at the Global South in particular, where policymakers often lack the resources and expertise to keep pace with rapid technological advancements, including AI.

Big tech’s polished lobbyists offer what seems like a reasonable solution, “workable regulation” – which translates to delayed, light-touch, or self-regulation of emerging technologies. They trade built-up “policy capital” (i.e., political relationships, domestic investments, training programs, education initiatives, headquarters visits, sponsorships) for “policy wins,” increasing their power across the entire policymaking lifecycle. But make no mistake: this “workable regulation” is nothing more than doublespeak, designed to protect hyperscaling ambitions, solidify market dominance, and allow corporate leaders to claim (wink, wink) that they wholeheartedly support regulation in the public interest.

The New Colonization: Power Without Borders

This isn’t just about market dominance or profits. It’s about power. The Global South, in particular, stands to lose the most in this new era of digital colonization. Just as colonial powers once plundered resources and imposed foreign rule, today’s tech giants are extracting valuable data, shaping policy discourse, and eroding national sovereignty — all under the guise of an inevitable trajectory of innovation.

Policymakers in the Global South face an uphill battle. Their parliaments are often underfunded, ministries understaffed, and technical expertise limited. Meanwhile, big tech continues to outpace them, setting the terms of engagement and shaping the global future of AI to suit its own interests. Without swift and decisive action (which big tech’s tactics work to prevent), the Global South will continue to suffer digital colonization, forced to rely on foreign technologies that do not necessarily align with their societal, cultural, or economic needs.

This ambitious desire to direct the supposedly inevitable arc of technological innovation – lest society lose out – was on full display recently. Shockingly, Meta’s Mark Zuckerberg took to the stage at Meta Connect 2024 sporting a bespoke oversized t-shirt with “Aut Zuck, Aut Nihil” displayed across his chest, an homage to “Aut Caesar, Aut Nihil” – “Either Caesar or Nothing.” The phrase carries historic connotations of a ruthless, uncompromising pursuit of power in which the only acceptable outcome is absolute authority; anything less is nothing.

This sense of dominating the trajectory of technological innovation, despite what elected and civic leaders may caution, remains pervasive across big tech. In the race to control AI development, big tech companies are not satisfied with contributing to a diverse, balanced ecosystem. Instead, they wield their enormous power to drive policy narratives and bend the direction of the technology itself, crushing competition and ensuring that they call the shots.

This all-or-nothing mindset drives their tactics, from lobbying for so-called “workable regulation” to exploiting the idea of open-source to reinforce a new form of digital colonization. All with dire pronouncements meant to instill FOMO amongst scared and unprepared decision-makers.

The great black lesbian feminist Audre Lorde declared in a speech at an NYU conference in 1979 that "the master's tools will never dismantle the master's house." Though her powerful phrase came in the context of a speech exposing underlying issues of racism within feminism, it also captures the illusion that tech giants create when they frame their tools — such as so-called "open-source" AI models — as mechanisms for democratizing technology for everyone. The reality is that these tools are designed and released within a system that serves big tech’s own interests while suppressing potential resistance: maximizing profits, consolidating power, and reinforcing their grip on data, ideas, and resources from the Global South.

Aut Zuck, Aut Nihil, indeed.

The very technology big tech claims will empower others is built within a framework that ensures their continued dominance. Relying on these tools within regulatory regimes that big tech’s lobbying dilutes and shapes only perpetuates a relationship of dependency and exploitation. Empowerment of Global South policymakers and their societies will not come from blindly adopting big tech policy frameworks but from developing tech policies that prioritize the needs of their own people, societies, and entrepreneurs so that the arc of technology bends in society’s desired direction. As this year’s Nobel Laureates in Economics, Daron Acemoglu and Simon Johnson, remind us, we can make social, political, and economic decisions to shape this arc because “the power to persuade is no more preordained than is history; we can also refashion whose opinions are valued and listened to and who sets the agenda.”

Taking Power Back: Regulating Behaviors and Models

So, what’s the solution? Intentional, meaningful regulation. Policymakers must see through the doomsday scenarios around regulation and the open source sleight of hand, recognizing them for what they are: tools of manipulation designed to protect the interests of billion and trillion-dollar companies. To effectively retrieve power lost to big tech and direct technological innovation in the ways we want, the most urgent question is not whether big AI models should be open or closed (Zuckerberg made that decision already), but how AI can serve the public good, protect citizens, and strengthen democracy.

We absolutely need thoughtful regulation that considers the broader societal, economic, and governance implications of frontier tech, including LLMs, and that targets the business models and behaviors, not just the technology itself. This means regulating the surveillance ad tech that subsidizes LLMs costing hundreds of millions of dollars to develop, which are then ostensibly given away for free. It also means holding companies liable for their products when they harm the public, rejecting the endless apologies and re-iterations after the damage is done (and profit made), and preventing big firms from using acqui-hire practices that inhibit competitiveness and the rise of competitive unicorns in the rest of the world. Finally, it means ensuring that Global South policymakers and thinkers demand and take seats at the table where AI agendas are set and priorities made.

For this to happen, policymakers need to become AI-literate, capable of understanding not just the technology but the business models and behaviors that drive and subsidize it, often surveillance ad tech. It requires collaboration across borders and sectors, with government agencies, civil society, and academia working together to ensure that the private sector’s AI innovations benefit people, not just create profits for a few.

Outsourcing policy stewardship to tech billionaires, their well-funded government relations teams, or tech-funded think tanks and trade groups is a dangerous path. We need enlightened, informed policy leaders and unbiased experts who understand that AI technology and business models must be treated and properly managed as a public good. The relentless pace of hyperspeed and hyperscaling may indeed need to be tempered.

Policymakers, particularly in the Global South, must reclaim agency and steer AI development toward equitable, inclusive outcomes. Only through informed, thoughtful regulation that is not distracted by manufactured choices and scaremongering can we roll back this new form of digital colonization and ensure that technology prioritizes serving humanity equitably around the world first, before shareholders and profits.

Authors

Michael L. Bąk
Michael L. Bąk is a Non-Resident Visiting Senior Fellow at the NYU Center for Global Affairs focusing on cyber policy and a recognized specialist in democratic governance, public policy, civil society, human rights, and ethical tech policy. He also serves on the Board of Advisors for the Centre for ...
