
Three Priorities to Rein in Big Tech in Times of Election Denialism

Karina Montoya / Nov 29, 2022

Karina Montoya is a reporter and researcher for the Center for Journalism and Liberty. She has a background in business, finance, and technology reporting for U.S. and South American media.

This essay is part of a series on Race, Ethnicity, Technology and Elections supported by the Jones Family Foundation. Views expressed here are those of the authors.

Apr 3, 2021: Art Deco facade of the Federal Trade Commission Building in Washington, DC. Shutterstock

Americans share the view that something is seriously wrong with the way big technology platforms intermediate social communications. By 2021, 8 in 10 Americans believed that large social media platforms helped spread disinformation more than reliable news. The amplification of online disinformation — a catchall term used here to refer to false or misleading material used to influence behavior — has indeed become a monumental problem. The spread of the “Big Lie,” the unsubstantiated claim that President Joe Biden was not the legitimate winner of the 2020 presidential election, has come to represent the extreme nature of this problem.

More specific concerns differ on either side of the political aisle. Conservatives call out “fake news” and decry censorship of their views on social media, so many want to strip these platforms of their power to moderate content. For many on the right, the solution is to substantially reform or repeal Section 230 of the 1996 Communications Decency Act, which allows platforms to curate content while shielding them from liability for the vast majority of the speech they host. Liberals see outright lies propagated over social media and believe platforms are not doing enough to remove them, so they defend the platforms’ ability to develop content moderation tools such as fact-checking and labeling, the suspension or removal of accounts, and greater oversight of political ads.

Fortunately, a way exists to address these concerns while also helping to deal with many other problems in today’s informational environment. But to get there, we need to broaden the conversation and consider how three policy levers can work together toward meaningful reform: competition policy, data privacy rights, and transparency requirements for platforms. By prioritizing these three fronts, we can not only go a long way toward solving the problem of disinformation, which peaks in election seasons, but also ameliorate other dangerous knock-on effects threatening democracy, such as the eroding economic foundations of independent journalism.

These policy fronts also present an opportunity to tackle how Big Tech’s operations exacerbate harm to communities of color and other vulnerable groups, such as non-English-speaking people. The lack of antitrust enforcement, data privacy protections, and platform transparency obligations in the United States affects these communities in multiple ways: as entrepreneurs, they are virtually unable to challenge technology incumbents on a level playing field; as consumers, they are exposed to harms such as unlawful discrimination in digital advertising; and as voters, they are targeted with disinformation and manipulation by politicians and campaigns.

1. Competition Policy and Antitrust Enforcement

Competition policy can be enforced in three major areas. First, antitrust enforcement can prevent mergers likely to lessen competition, such as deals between rivals in the same market (horizontal mergers) and between companies with buyer-seller relationships in the same supply chain (vertical mergers). Second, it can prevent business practices that threaten competition or entrench the market power of big firms. Third, enforcement requires empowering federal agencies, such as the Federal Trade Commission (FTC) and the Department of Justice (DOJ), to aggressively prosecute violations of antitrust law, including pursuing the breakup of dominant corporations.

Under the competition policy in effect up to the 1960s, today’s Big Tech corporations would have faced antitrust suits from federal enforcers and private parties alike. At that time, the understanding of antitrust law — grounded in the Sherman Act of 1890 — upheld the idea that market concentration in the hands of a few players is likely to weaken competition and bring harmful consequences for workers, small businesses, and consumers. Policymakers and courts were wary of mergers between large firms seeking efficiencies, since their control of large swaths of the market could lead to price hikes or present an insurmountable obstacle for new entrants.

But beginning in the 1970s, the interpretation of antitrust law broke with the past. Conservative legal scholar Robert Bork argued in his book The Antitrust Paradox that antitrust law existed to maximize consumer welfare, and that promoting efficiency was the best way to achieve it. Using this lens, policymakers became less concerned with the potential for mergers to harm innovation or small businesses, and courts began to measure harm primarily by whether consumers faced higher prices. About fifty years later, and within six months of taking office, President Biden issued an executive order seeking to restore the historical reading of antitrust laws in order to foster healthier competition.

Antitrust laws regulate many types of business practices and market structures potentially harmful to the American economy. A subset of them governs breaking up existing monopolies and preventing mergers that create or entrench outsized corporate power: the 1890 Sherman Act, which prohibits monopolization or attempts to monopolize a market, and the 1914 Clayton Act, which bans mergers and acquisitions likely to lessen competition or create a monopoly, as described by the FTC. The FTC and the DOJ Antitrust Division can bring federal antitrust lawsuits, as can state attorneys general, who also enforce their own state antitrust laws. Courts ultimately decide how to apply antitrust law on a case-by-case basis.

In the last two years, several antitrust cases have been brought to court. The DOJ Antitrust Division sued Google for allegedly monopolizing the online search market. The FTC brought a complaint alleging that Facebook’s acquisitions of Instagram and WhatsApp were anticompetitive, aimed at killing nascent competition. Both cases are moving forward in federal courts. At the state level, a coalition of 17 states and territories led by the Texas Attorney General sued Google over monopolization of the digital advertising market. That case is also making progress in a New York district court.

The digital advertising market offers a good case study of the dangers of unchecked mergers. Most digital advertising outside social media is placed through programmatic exchanges. A very simplified version of how this market works is to imagine three ad tech products: one that serves publishers, another that serves advertisers, and a third, the ad exchange, that connects the two. The price for an ad is set through real-time bidding: publishers offer their ad inventory and advertisers bid for it. The publisher-side ad tech pools inventory according to demand for particular audiences, and the ad exchange picks the winning bid in a split second. As incredibly efficient as it sounds, it is also a very opaque system rife with fraud.
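For readers who want the mechanics spelled out, here is a minimal sketch in Python of a single programmatic auction under the simplified three-product model described above. The names (AdSlot, Bid, run_auction) and the dollar figures are invented for illustration and do not correspond to any real ad tech API; actual exchanges run far more complex, and far more opaque, auctions.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class AdSlot:
    """Inventory a publisher offers: the site, the audience segment, and a floor price."""
    publisher: str
    audience_segment: str
    floor_cpm: float  # minimum acceptable price, in dollars per thousand impressions

@dataclass
class Bid:
    advertiser: str
    cpm: float  # what the advertiser is willing to pay per thousand impressions

def run_auction(slot: AdSlot, bids: list[Bid]) -> Bid | None:
    """The exchange's role: pick a winner for one ad slot in real time."""
    eligible = [b for b in bids if b.cpm >= slot.floor_cpm]
    if not eligible:
        return None  # no advertiser met the publisher's floor price
    # Highest bid wins; real exchanges often charge a second price and take fees.
    return max(eligible, key=lambda b: b.cpm)

# A publisher's ad server offers one slot; advertiser-side tools respond with bids.
slot = AdSlot(publisher="local-news-site", audience_segment="sports-fans", floor_cpm=1.50)
bids = [Bid("sneaker-brand", 2.10), Bid("streaming-service", 3.40), Bid("car-dealer", 1.20)]
print(run_auction(slot, bids))  # Bid(advertiser='streaming-service', cpm=3.4)
```

In this toy version the winner is obvious; the opacity problem arises because, in practice, the publisher, the advertiser, and the exchange are often operated by the same company, and none of the intermediate prices or fees are visible to the other parties.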

The largest ad tech companies in all these segments belong to either Google or Facebook. Google, more specifically, is under scrutiny by the Texas AG for allegedly using its dominance in the ad exchange market to coerce publishers into managing their inventory with DoubleClick, an ad tech company Google acquired in 2007. But Big Tech routinely downplays the conflicts of interest it creates by operating in multiple segments of the same supply chain. Last year, during the debate on the Digital Markets Act, experts in the European Union called this dynamic out. “Google is present at several layers of the [ad tech chain], where it has large market shares at each stage. The advertiser receives hardly any information on readers of the ad. […] This opacity is almost by design, and could be in itself a manifestation of abuse of market power,” a European Commission report reads.

Another way to enforce antitrust laws is to target practices that entrench the power of dominant market players. One such practice is self-preferencing. This happens when a firm “unfairly modifies its operations to privilege its own products or services,” which would violate Section 2 of the Sherman Act, writes Daniel Hanley, policy analyst at Open Markets Institute. In 2017, the EU fined Google heavily for using its power over online search to favor its own comparison shopping service. In the U.S., Amazon uses the same tactics to favor its own brands over those of the sellers it competes against, reports The Markup.

This is exactly the behavior that the American Innovation and Choice Online Act seeks to correct. Even though the bill has bipartisan support, and was approved months ago for a full Senate vote, Sen. Chuck Schumer (D-NY) failed to bring it to the floor prior to the November midterm elections. Its approval is so critical that the White House plans to push for its passage before Republican lawmakers — many of whom are ready to oppose sound antitrust enforcement — shift Congressional priorities in January.

Market concentration stifles innovation and shuts out competitors, harms that disproportionately affect entrepreneurs, including those from communities of color, who already face severe challenges raising capital and accessing credit. Organizations that would not normally be expected to push for antitrust enforcement recognize the power of this policy lever. Indeed, racial justice organizations such as Color of Change support it. In its recently released Black Tech Agenda, the organization lists antitrust enforcement as one of six priorities to advance racial equity in the technology industry. There are indications that the FTC, under the leadership of Lina Khan, and the DOJ Antitrust Division, led by Jonathan Kanter, are supportive of this approach as well.

2. Data Privacy Rights

As the internet progressively transformed from a government-funded experiment into a privatized network, conflicts between online businesses and advocates for data privacy rights grew and persisted. Regulating data privacy can be as impactful as antitrust enforcement, as it can change the balance of power between platforms and users over how personal data is obtained and used. Thus, this policy front can significantly curb Big Tech’s ability to pervasively surveil users online, a business practice that facilitates voter suppression campaigns, hurts independent journalism, and degrades online safety.

Data privacy rights protect users of digital services when private actors access their information (however it may be defined) in a variety of contexts. Given the expansion of the internet, this regulation focuses on how data is collected, stored, and processed, and for what purposes. The United States has volumes of privacy laws, many of which pre-date the internet, and most were designed to protect privacy in sector-specific contexts, for example, health care (the Health Insurance Portability and Accountability Act, HIPAA), education (the Family Educational Rights and Privacy Act, FERPA), or children’s online activity (the Children’s Online Privacy Protection Act, COPPA). The U.S. does not yet have a comprehensive data privacy law. But it does have a de facto regulator for such matters in the FTC. Similar to its role in antitrust enforcement, the FTC has authority to establish new rules for how businesses collect and use personal data.

In 2016, the European Union passed a seminal law regarding data privacy rights called the General Data Protection Regulation (GDPR), which went into effect in 2018. The GDPR seeks to enhance users’ control over the privacy of their online data, and it became highly influential globally. Among its seven principles, four — data minimization, purpose limitation, security, and accountability — have been adopted by various countries. Based on such principles, the right to opt out of data collection for advertising purposes, and the duty of companies to protect such data from unauthorized use, have also been adopted in recent American state laws, such as the 2018 California Consumer Privacy Act (CCPA).

The global push for data privacy protections also responds to the risks that ad tech poses to users’ safety. As mentioned earlier, ads are placed through programmatic exchanges. This system allows Google, Facebook, and many others in ad tech to follow users across the web and capture their location, the content they consume, and the devices they use, among other data, to feed audience profiles. On top of that, technology giants can leverage data they collect from their own web properties to build more detailed profiles that include race, employment status, political views, health conditions, and more. This business model, called surveillance advertising, “relies on persistent and invasive data collection used to try to predict and influence people’s behaviors and attitudes,” writes media scholar Matthew Crain.
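As a rough sketch of how scattered tracking signals become an audience profile, the snippet below folds hypothetical browsing events into per-user records. The event tuples, field names, and the build_profiles function are invented for illustration; real ad tech profiles draw on far more data sources and on inference, not just raw collection.

```python
from collections import defaultdict

# Hypothetical tracking events captured as users browse the web: (user_id, signal, value).
events = [
    ("user-123", "location", "Phoenix, AZ"),
    ("user-123", "device", "Android phone"),
    ("user-123", "content", "college football recap"),
    ("user-123", "content", "pickup truck review"),
    ("user-456", "location", "Miami, FL"),
    ("user-456", "content", "apartment listings"),
]

def build_profiles(events):
    """Fold individual tracking events into per-user audience profiles."""
    profiles = defaultdict(lambda: defaultdict(list))
    for user_id, signal, value in events:
        profiles[user_id][signal].append(value)
    return profiles

profiles = build_profiles(events)
# An advertiser could now target, say, Phoenix users who read about trucks.
print({user: dict(signals) for user, signals in profiles.items()})
```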

When the Cambridge Analytica scandal broke in 2018, it demonstrated just how readily targeting tools built on surveillance advertising can be exploited, especially to push disinformation toward communities of color. In 2016, one tranche of former president Donald Trump’s campaign ads — run by Cambridge Analytica — was described by his team as “voter suppression operations” targeting Black Americans with disinformation to dissuade them from voting for Hillary Clinton. The GDPR took effect later in 2018, shortly after those events came to light, and rules of its kind could have prevented that and other forms of data exploitation. But due to weak enforcement, Big Tech has since been able to work around the law by forcing users to accept surveillance of their online activities in exchange for access to its services.

Tech firms’ ability to surveil people’s online activities and the opacity of the digital advertising market also undermine the ability of news organizations to produce journalism sustainably on a level playing field. It is untenable for news organizations to try to “compete” under this system — amassing web traffic in hopes of getting “picked” for a bid to fill an ad space. Furthermore, most advertising budgets do not go to publishers, but to the ad tech complex dominated by Google and Facebook. Capturing swaths of personal information that individuals would not easily give up undermines people’s right to privacy, but Big Tech has normalized this surveillance practice into an unassailable business model. Progressively, this has drawn the attention of the FTC, which is currently preparing new rules to stop the harms of commercial surveillance.

Despite criticism of the GDPR’s faulty enforcement in the EU, lawmakers in the U.S. quickly sought to follow similar standards and establish clearer enforcement. The CCPA, for example, gave users the right to opt out of the sale of their data for advertising purposes. Initially, though, the law’s wording left out many Big Tech corporations because they did not sell personal data but shared it; since sharing was not banned, they continued the practice. The law was eventually amended to cover both data sharing and selling, so platforms now offer a “Do Not Sell My Data” option that covers both actions. Then in 2020, California passed a new law to strengthen the CCPA with new restrictions on the collection and use of, for example, a person’s race or precise location. In 2023, the California attorney general and the newly created California Privacy Protection Agency will begin enforcing this law.

Congress has followed suit with a bipartisan bill, the American Data Privacy and Protection Act (ADPPA). Approved by an almost unanimous bipartisan majority in the House Committee on Energy and Commerce in July, the ADPPA provides a clear list of permitted uses of personal data. Although it contains strict language to limit data collection for advertising purposes, it arguably does not go far enough to ban surveillance advertising. But it does ban the use of sensitive data for advertising — which would protect information such as health conditions, sexual orientation, immigration status, and precise location — unless users opt in. Currently, this is the most complete data privacy legislation proposed in Congress.

Whether through Congress or federal agencies, the U.S. is likely on a path toward new federal standards for data privacy rights. How they are designed and implemented can, in combination with antitrust enforcement, significantly curtail Big Tech’s dominance built on online surveillance. But neither policy is a substitute for the other. The intersection of this policy front with antitrust remains the subject of scholarly discussion, as in the work of Frank Pasquale on privacy, antitrust, and power, or Dina Srinivasan’s examination of competition and privacy in social media. There is an opportunity for policymakers to incorporate such discussions into further legislative or administrative action, and to apply a racial equity lens, as in antitrust enforcement.

3. Platform Transparency and Algorithmic Accountability

All large social media platforms that curate content are built on algorithms that pursue user engagement. This logic applies to both paid and non-paid content placed on users’ timelines. The engagement pattern users exhibit, when aggregated, results in new data that informs the platforms’ ranking systems on how to continue curating content. It’s an automated process, and little is known about how these automated decisions are made. Therefore, scholars have a great interest in understanding, for example, how political ads are targeted, how targeting choices influence ranking systems, and what exactly platforms are doing — or not — to prevent harmful effects, such as amplifying disinformation.

That quest has been met with obstacles. In 2021, Facebook shut down a study — the Ad Observatory Project, run by a team of researchers from New York University — that examined ad targeting on the platform and revealed discrimination against people of color by Facebook’s advertising systems. Facebook asserted that the browser plug-in used to collect data from willing participants posed privacy risks and involved automated data scraping in violation of its terms of service, an argument that failed to convince independent experts. Facebook’s move immediately reignited researchers’ calls for legislation granting access to platforms’ algorithms to determine their impact on society.

Today, researchers recognize Big Tech’s business interests conflict with the need for public oversight over social media, and are looking to remedy that situation. For Laura Edelson, co-creator of the plug-in for NYU’s Ad Observatory, it is time to accept that “voluntary transparency regimes have not succeeded,” and federal legislation is needed. Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, agrees with Edelson. “It is essential that Big Tech companies no longer have the power to determine who has access [to data for research] and under what conditions,” Tromble said last year.

One main issue at stake is how to open the black box of social media ranking systems. There is a general understanding that, through machine learning models, ranking systems predict the probability of various outcomes, for example, whether a user would click ‘like’ on a post or reshare it, and whether the content is harmful under the platform’s policies. These probabilistic models surface the most engaging content, which also tends to be the most harmful content. Mark Zuckerberg once described this as a “natural pattern” in social media content. In the view of large social media platforms, they are fighting consequences they either did not foresee or did not intend to provoke.
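A minimal sketch of that logic, using invented probability scores and weights: each candidate post gets predicted probabilities of a like, a reshare, and of being harmful, and a weighted combination decides what ranks highest. Real ranking systems combine hundreds of signals, and the exact weights are precisely what researchers cannot see.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float     # model's predicted probability the user will like the post
    p_reshare: float  # predicted probability the user will reshare it
    p_harmful: float  # predicted probability the post violates platform policy

def rank_feed(candidates, w_like=1.0, w_reshare=3.0, harm_penalty=5.0):
    """Order posts by predicted engagement, discounted by predicted harm.

    Illustrative weights: reshares count more than likes, and a predicted
    policy violation pushes a post down rather than removing it outright.
    """
    def score(c: Candidate) -> float:
        return w_like * c.p_like + w_reshare * c.p_reshare - harm_penalty * c.p_harmful

    return sorted(candidates, key=score, reverse=True)

feed = rank_feed([
    Candidate("calm-news-update", p_like=0.20, p_reshare=0.02, p_harmful=0.01),
    Candidate("outrage-bait",     p_like=0.35, p_reshare=0.15, p_harmful=0.10),
])
# Despite its higher predicted harm, the more engaging post still ranks first.
print([c.post_id for c in feed])  # ['outrage-bait', 'calm-news-update']
```

Under these assumed weights, the post with the higher predicted harm still outranks the tamer one because its predicted engagement is so much higher, which is exactly the dynamic critics of engagement-based ranking point to.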

But evidence has emerged that Big Tech firms have more knowledge than their leaders publicly admit about the harms their platforms inflict on society, and that they fail to disclose such harms and provide timely redress. Reports have shown, for example, that in advertising, Facebook’s algorithms discriminate against people of color in detrimental and unlawful ways that strip them of employment or housing opportunities, and that the company spares certain influential users from sanctions when they abuse its platform. Without whistleblowers and significant efforts by researchers and journalists who already find it difficult to access Big Tech’s data, we would not even be aware of these problems.

During election seasons, an acute problem related to ranking systems and their effect on vulnerable groups is the amplification of disinformation in languages other than English. Voting information is already hard to find in non-English languages, and voters fill that void with whatever they encounter, for example, on WhatsApp, which has massive reach among U.S. Latino users. Amid the myriad moderation policies Big Tech activates during elections, what matters is how automated systems apply them. Currently, it is impossible to know how many resources platforms dedicate to languages other than English, or how their ranking systems handle them, in the U.S., let alone in other countries.

To enable public oversight that allows these and other findings to come to light more readily, researchers call for policies mandating platform transparency and algorithmic accountability. Platform transparency policies, for example, could require Big Tech corporations to give researchers access to anonymized data for public-interest studies, or to alert regulators when they learn their algorithms are causing harm. Algorithmic accountability policies would charge platforms with carrying out independent audits of their algorithms to prevent racial discrimination, for example, with penalties if problems persist.

In 2021, at least four bills were introduced in Congress proposing some version of these measures. One such bill, the Platform Accountability and Transparency Act (PATA), reemerged in a recent Senate Homeland Security Committee hearing as a potentially adequate measure. This bipartisan bill would compel platforms to grant researchers access to anonymized data through a clearing process overseen by the FTC. If platforms fail to comply, they would lose the liability protection for hosting third-party speech provided by Section 230. But given the number of bills proposing platform transparency rules, experts such as Edelson have pointed to the need to identify unifying principles across these proposals and the urgency of moving forward with a more thorough platform transparency reform.

Another policy tool to learn from is the EU’s Digital Services Act (DSA), which tackles platform accountability and speech moderation on the continent. The DSA’s Article 40, for example, compels platforms to provide access to certain data to researchers vetted by EU Commission-appointed officials. The DSA also mandates that large platforms conduct risk assessments of their algorithms to investigate how illegal content, as defined by EU law, spreads. The White House has reportedly voiced support for creating a voluntary framework in the U.S. that would reflect some of the DSA’s provisions; whether that interest will prevail should become clearer early next year, when the Trade and Tech Council meeting between U.S. and European leaders takes place.

As large social media platforms increasingly act as essential infrastructure for social communications, public oversight can be facilitated by platform transparency and algorithmic accountability rules. These become urgent in light of how technology corporations are prone to use other concerns, such as data privacy, as an excuse to shield themselves from transparency requests. In the case of the Ad Observatory project, for example, Facebook argued that it shut the project down to comply with a privacy consent decree imposed by the FTC after the Cambridge Analytica scandal broke. Eventually, the FTC called on the corporation to correct that record.

Conclusion

Following the emergence of the internet, the outsized power of a handful of corporations now shapes how Americans conduct their social interactions. YouTube, Facebook, and Instagram lead measures of social media use among Americans. Google holds a worldwide market share of 93 percent in search; together with Facebook, it practically holds a duopoly in the digital advertising industry. Market concentration, online surveillance, and the lack of platform transparency obligations are fundamental to Big Tech’s business conduct, all of which perpetuates and exacerbates known harms to people of color and other vulnerable groups in new, more pervasive ways than in other, more strictly regulated markets.

The three policy fronts described here are not an exhaustive list, nor a comprehensive technology policy prescription. But they tackle the critical areas where Big Tech has the most influence, and where regulators, researchers, and journalists have found the most disturbing risks to democracy and to social and racial equity. With a split Congress, GOP leaders like Rep. Jim Jordan (R-OH) have more room to maneuver the policy focus away from these three fronts toward, for example, the view that specific content moderation restrictions should be imposed on social media firms. It will be key for the White House, policymakers who seek bipartisan agreement, and other organizations that support these policy levers to elevate their voices if meaningful progress is to be attained by 2024.

Authors

Karina Montoya
Karina Montoya is a journalist with a background in business, finance, and technology reporting for U.S. and South American media. She researches and reports on broad media competition issues and data privacy at the Center for Journalism & Liberty, a program of the Open Markets Institute, in Washington, D.C.
