Justin Sherman is a senior fellow at Duke University’s Sanford School of Public Policy.
Late last year, the cryptocurrency exchange FTX completely imploded. The US government has charged FTX’s founder, Sam Bankman-Fried, with conspiracy to commit wire fraud, wire fraud, conspiracy to commit commodities fraud, conspiracy to commit securities fraud, conspiracy to commit money laundering, and conspiracy to defraud the Federal Election Commission and commit campaign finance violations—charges to which he pleaded not guilty last week.
But if you are just getting up to speed on the FTX scandal after the holidays—or even attempting to wrap your head around cryptocurrencies in general—the story is much larger.
Blockchain hype is everywhere, and it goes far beyond cryptocurrencies. Simply typing “blockchain” into Google yields countless articles, blogs, and think pieces, many heralding its apparent potential to change the world. Excited authors explain so-called decentralized finance, or DeFi, which aspires to eliminate the middlemen in routine financial transactions via the blockchain; or assert that “Web3,” an imagined, future version of the internet built on blockchains, will enhance freedom of speech and information access worldwide through decentralized internet architecture. The World Economic Forum’s annual event in Davos, Switzerland, a pinnacle gathering of feel-good, billionaire and corporatist “solutions” that often ignore root problems (technology included), has even gotten in on the blockchain craze.
But this kind of blockchain hype is what happens when you have no grasp of internet history.
Several decades ago, some of the internet’s most vocal champions made the mistake of assuming (or just choosing to believe) that technical disruption would mean a complete disruption of power—socially, politically, and economically. It was a technology-centric, privileged, and in many ways technologically deterministic worldview, one that reckoned the internet was born and existed separate from the entrenched power structures of the world. That was and is false; years of growing, increasingly public, internet-entangled problems have made that painfully clear.
While improving the internet and using new technologies for social good and to challenge existing hierarchies may be laudable goals, those who hype blockchains and cryptocurrencies must not repeat history’s mistakes. They must remember that a better internet requires looking beyond the code and focusing on the social, political, and economic power dynamics around us.
Perhaps the most famous articulation of the techno-utopian view is John Perry Barlow’s 1996 “A Declaration of the Independence of Cyberspace.” It declared to “governments of the Industrial World” that “you have no sovereignty where we gather” and that “cyberspace does not lie within your borders.” It continued:
We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.
While Barlow’s manifesto is probably the most famous and the most cited here, it hardly stands alone. As Fred Turner writes in From Counterculture to Cyberculture, in the 1960s, “young Americans encountered a cybernetic vision of the world, one in which material reality could be imagined as an information system. To a generation that had grown up in a world beset by massive armies and by the threat of nuclear holocaust, the cybernetic notion of the globe as a single, interlinked pattern of information was deeply comforting: in the invisible play of information, many thought they could see the possibility of global harmony.” Stewart Brand, editor of the Whole Earth Catalog, wrote of the “hacker ethic” in 1984 that “no other group I know of has set out to liberate a technology and succeeded.” He added, “in reorganizing the Information Age around the individual, via personal computers, the hackers may well have saved the American economy.” (That same year, Brand famously observed that information simultaneously “wants to be expensive” and “wants to be free.”)
Opining in Wired magazine in 1995, the late Mark Poster proclaimed that the internet “allows individuals to define their own identities and change them at will.” Because “identities are mobile, dissent is encouraged, and ‘normal’ status markers are absent, it is a very different social ‘space’ from that of the public sphere.” Poster continued: “The internet threatens the government (unmonitorable conversations), mocks private property (the infinite reproducibility of information), and flaunts moral propriety (the dissemination of pornography).” Just one of many others like it, the article captured the technology-centric, privileged, and technologically deterministic view of the internet at hand—underpinned by a fundamental belief that the invention of the web would completely disrupt societal power structures.
Unlike today’s tech workers and entrepreneurs, the readers of early Wired were direct heirs of the sixties’ countercultural ethos (not coincidentally, the same spirit that informed the rise of personal computing in the seventies). On the Venn diagram between monied techno-utopianism and hippie idealism, Wired’s target reader sat squarely in the center. … In early Wired, technology wasn’t just entertaining; it was a tool, meant to liberate and enlighten.
Examples abound. After all, “On the internet, nobody knows you’re a dog,” jokes the famous New Yorker cartoon from 1993.
Clearly, this general set of ideas was and is wrong. The notion that technical changes will immediately collapse existing power structures, or create a world independent of social, economic, and political power, is debunked by everything from the market dominance of a few cloud providers and social media platforms, to governments controlling the internet architecture within their borders and using state violence to shape online activity, to rampant hate speech online—and the simple fact that a white man’s experience with harassment on Twitter is almost always very different from (if not nonexistent compared to) that of a Black woman.
These beliefs were problematic enough already. They were widespread in Silicon Valley and ignored many already emerging issues with the internet, such as the risks of allowing people to communicate online with few robust, built-in privacy and security features. Making things worse, these visions of cyber utopia, largely promoted by white American men, found their way into policymaking, touching everything from the US’s so-called internet freedom agenda, which lasted at least until the early 2010s, to policymakers’ slowness in recognizing the importance of privacy, cybersecurity, hate speech, harassment, market concentration, and other issues surrounding the internet. Clearly, the perspective in Congress, in many parts of the US media, and elsewhere has changed, but it took a while—including the hard work of many committed activists, researchers, and policy advocates, as well as high-profile scandals that could not be ignored.
Those who hype blockchain and “Web3” are following in the footsteps of early internet evangelists, and they are doing so even with the added benefit of hindsight. Undoubtedly, it sounds nice: just fix up the underlying architecture, and the internet and the world will be better. Data won’t be stolen and monetized. People can freely access information without censorship and corporate pressures. The dreams go on.
Among the many factors at play in the FTX scandal, including Bankman-Fried’s allegedly criminal behavior and continued “founder hype” in tech, was his vision of cryptocurrencies as a means of social disruption. The story is repeating itself: a collection of privileged technologists is promoting a vision of the world in which blockchain and “Web3” revolutions will enable individuals to avoid censorship, break free of institutionalized financial systems, and live beyond the surveillance state. This, once again, repeats the mistake of ignoring material realities and the social, political, and economic power structures around us. Even if a blockchain-based internet did not lend itself to today’s economic model of data collection, targeting, and brokerage—and that’s a big if—the internet is not the end-all, be-all of surveillance; preventing data collection while online does not mean people are free from surveillance on the streets of their cities, in their workplaces, or in their communities. It’s worth quoting Ruha Benjamin, who says incisively in Race After Technology (emphasis in the original):
Many tech enthusiasts wax poetic about a posthuman world and, indeed, the expansion of big data analytics, predictive algorithms, and AI, animate digital dreams of living beyond the human mind and body – even beyond human bias and racism. But posthumanist visions assume that we have all had a chance to be human.
None of this is to say technology cannot fundamentally impact social, economic, and political power structures. (I write regularly about the intersections of technology and geopolitics as well as corporate and state surveillance, among other topics.) And it is a laudable goal to push for using technology for socially good purposes, to improve people’s lives, and to hopefully contribute to making society a more just place.
Assuming technological disruption is the primary answer to societal injustice, though, repeats an error that many champions of the internet made before. To avoid history’s mistakes—and to avoid ignoring the many problems with blockchains, cryptocurrencies, and “Web3” ideas that span privacy, cybersecurity, financial regulation, and more—those who hype these technologies need to learn the lessons of the past.