What Does Europe's Digital Services Act Mean for Targeted Political Advertising in the U.S.?
Pooja Iyer / Nov 17, 2022

Pooja Iyer is an advertising doctoral student at the University of Texas at Austin and a graduate research fellow at the Center for Media Engagement’s Propaganda Research Lab, studying big data, privacy, surveillance, and propaganda in advertising, marketing, and media.
Former U.S. President Barack Obama, one of the earlier political figures to leverage social media to his advantage, has changed his tone toward social platforms. In early April, he tweeted that the Obama Foundation would begin work to combat disinformation and its corrosive effect on trust in public institutions. One week later, Obama followed up with a speech at Stanford University highlighting how quickly falsehoods proliferate on social platforms, their polarizing effect on society, and their outsized impact on democracy. He announced he would work to help curb disinformation. He noted that “Europe is forging ahead with some of the most sweeping legislation to regulate the abuses that are seen in big tech companies,” and mentioned the EU Digital Services Act (DSA) specifically.
Picking up where GDPR left off
Although President Obama's message focused on platforms, other types of technologies that contribute to the mis-, mal-, and disinformation ecosystem must not be ignored. Through the purchase, sale, and trade of individuals' personal information, data brokers wield enormous power that enables privacy infringement and the proliferation of disinformation through targeted, politically based advertising. The European Commission has in fact initiated two legislative packages in the EU: the Digital Markets Act (DMA), which regulates gatekeeper online platforms, and the Digital Services Act (DSA), which will govern online intermediaries and platforms. While the DMA and DSA overlap in places, for policymakers in the United States interested in combating the effect disinformation has on democracy, the DSA may set a foundation for achieving this goal, provided the DSA gets its mechanisms right.
The DSA is Europe's latest effort to harmonize how online platforms are held accountable so that internet users, and their fundamental rights, are better protected. Years in the making, the measure saw European legislators reach a political agreement on its text on April 23, followed by formal adoption by the European Parliament on July 5 and final Council approval on October 4, 2022. The DSA is highly anticipated because of key legislative elements that will introduce new constraints on tech firms, including very large online platforms (VLOPs) and very large online search engines (VLOSEs), intermediary services, and hosting providers. The DSA will also establish that recipients of digital services hold the right to seek redress for harms inflicted upon them. New marketplace rules and content moderation guidelines will require platforms to remove illegal content swiftly. Although the DSA was expected to include more stringent rules around deceptive design to tackle “dark patterns”, it does not go beyond existing standards in consumer protection and data protection rules.
The official legislative text formally adopted by the European Council further highlights the role of VLOPs and VLOSEs with regard to advertising, namely “in relation to illegal advertisements or manipulative techniques and disinformation with a real and foreseeable negative impact on public health, public security, civil discourse, political participation and equality.” It insists on transparency around targeting criteria, advertising content, advertiser details, and the funding agent behind each advertisement. The goal is to reduce the harm that advertisements can inflict on “persons in vulnerable situations, such as minors”.
Furthermore, the DSA places rules around the need “to obtain consent of the data subject prior to the processing of personal data for targeted advertising”, placing agency back with individuals with regard to the use of their data. It also bans the use of sensitive data, such as race, religion, and political opinion or affiliation, for targeted advertising, which will help tackle surveillance-based advertising as well as contribute to combating disinformation campaigns.
Obligations for VLOPs include mandatory transparency on algorithmic systems to hold them accountable for societal harms, including the spread of disinformation, that stem from the use of their digital services. The EU's General Data Protection Regulation (GDPR) was supposed to solve the issue of sensitive data being leveraged for targeted advertising and the spread of disinformation, but it has so far lacked enforcement. To accomplish what the GDPR could not, EU regulators must ensure that the DSA has the teeth it needs to combat these societal harms.
Data brokers as part of the disinformation ecosystem
Another vital component that the DSA looks to tackle is “manipulative (targeting) techniques” that “can negatively impact entire groups and amplify societal harms” as they contribute to the dissemination of disinformation, a phenomenon that is difficult for third parties to scrutinize. To that end, the DSA introduces regulations that restrict certain types of targeting and require more transparency. These provisions will likely affect the role of data brokers in the disinformation ecosystem. Under the EU's Data Governance Act, data brokers, also known as data intermediaries, are defined as entities that facilitate the trading of data, including the handling of both personal and non-personal information.
The power that data brokers hold is not confined to the EU. As we found at the Propaganda Research Lab at the Center for Media Engagement (CME) at the University of Texas at Austin, the sale of personal data by brokers is a largely unregulated market in the U.S.
The Propaganda Research Lab at CME conducted research to understand the power data brokers have to geo-target individuals, how this ability enables wide-scale political influence in the U.S., and how even the world's toughest online regulators and legislation fall short of protecting individuals against such behavior. To do so, the team conducted eleven confidential interviews with current and former government officials, executives of data broker companies, academics, and members of civil society organizations.
Geolocation technology, at its core, offers data brokers the ability to identify individuals based on the geolocation of their cell phones. This is possible through geofencing, which, as one Head of Political Advertising at a major advertising technology firm explained to us, is "when a phone [reports] actual location, volunteers’ information, and is triangulated from cell towers or because [the individual] is using an app." In the latter case, "there’s a [Software Development Kit] in the app somewhere that is reporting that geolocation out along with enough [information] about that person’s identity to be able to make them available for targeting."
Another interviewee, the CEO of a major political data firm, added that firms like theirs will "identify people based on the geolocation of the cell phone at the time of a campaign rally, try to match that information to cell phone numbers in voter files, and then be able to identify [individuals] as likely supporters." Targeting at this level is not fully accurate, because data points represent hundreds of thousands, if not millions, of phones at once. A Senior Strategist outlined the set of assumptions their data firm typically makes to narrow datasets and provide tailored insights to clients:
[O]nce we have a mobile ID number that has been in one location, we can then basically tag. You know, it’s kind of a gross analogy with the way that wildlife is tagged …Same thing with your cell phone. Let’s say your cell phone was at this particular location, a church, a political venue, whatever. And then it goes back to the same address or roughly the same address like four to five nights in a row. We then kind of make the next set of assumptions that this cell phone belongs to x register.
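To make the strategist's "tagging" logic concrete, here is a minimal, hypothetical sketch in Python. Every name and threshold in it is an illustrative assumption on our part (device pings with coordinates and dates, a roughly 100-meter radius standing in for "the same address," four distinct nights), not a detail drawn from any particular firm's systems:

```python
# Hypothetical sketch of "tagging" a mobile device: infer a probable home
# location from repeated night-time sightings, then flag devices that were
# also seen at a political venue. All names and thresholds are illustrative.
from math import radians, sin, cos, asin, sqrt


def distance_m(lat1, lon1, lat2, lon2):
    """Rough great-circle distance in meters between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))


def infer_home(night_pings, radius_m=100, min_nights=4):
    """night_pings: list of (night_date, lat, lon) tuples for one device.
    Returns a (lat, lon) anchor seen on at least `min_nights` distinct nights
    within `radius_m`, or None if no stable night-time location emerges.
    (The quote describes consecutive nights; for simplicity this sketch only
    counts distinct nights.)"""
    for _, lat, lon in night_pings:
        nights = {d for d, la, lo in night_pings if distance_m(lat, lon, la, lo) <= radius_m}
        if len(nights) >= min_nights:
            return (lat, lon)
    return None


def tag_rally_attendees(pings_by_device, venue, venue_radius_m=200):
    """Return device_id -> inferred home for devices seen near `venue`.
    pings_by_device maps a device ID to {"day": [...], "night": [...]} pings."""
    tagged = {}
    for device_id, pings in pings_by_device.items():
        at_venue = any(
            distance_m(venue[0], venue[1], la, lo) <= venue_radius_m
            for _, la, lo in pings["day"]
        )
        home = infer_home(pings["night"])
        if at_venue and home is not None:
            tagged[device_id] = home  # next step: match home location to a voter file
    return tagged
```

In practice, the inferred home location would then be matched against addresses in a voter file, which is exactly where the kind of error discussed below, a phone that belongs to a relative rather than the registered voter, can creep in.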
Targeting as a means to effectuate influence
Making broad assumptions does lead to errors. For example, a cell phone may not belong to the anticipated registered voter, but instead to a close relative. However, if a cell phone is associated with a voter, the same interviewee said, "a whole trove of information on the voter [becomes available]. [Y]our address, we probably have a landline phone number, if you have it, about 75% chance of having your cell phone number, and even an email address. More importantly, we have demographic information on your age and gender." The societal harm lies not in political advertising alone, but also in the use of certain forms of personal data to target vulnerable groups with disinformation.
A Director of Strategy at a political campaign strategy firm explained that other targeting methodologies exist: their firm targets people by making selections based on demographics or other file information retrieved from polls or other research. Once that is done, their firm pulls lists based on those factors and “exports those into Excel [spreadsheets] and matche[s] them to third-party databases of ad targets, consumer data, and geolocation data.” While harrowing, this seems to pale in comparison to over-the-top (OTT) targeting, accessible through internet streaming platforms and delivered to TVs and other devices. OTT targeting lets political actors show more personalized ads to individuals while circumventing traditional, pre-planned broadcast schedules and geographic limitations. Based on our research, we know U.S. data brokers behave in ways that would draw scrutiny from EU regulators under the DSA, if it is enforced as it should be.
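As a rough illustration of the list-matching workflow this strategist describes, the following Python sketch joins an exported target list against third-party datasets. The file names, column names, and the shared "hashed_email" match key are our own illustrative assumptions; real match keys and data layouts vary by vendor:

```python
# Hypothetical sketch: enrich an exported list of targets with third-party data.
# All file names, columns, and the match key are illustrative assumptions.
import pandas as pd

# List pulled on demographic/poll criteria and exported to a spreadsheet.
targets = pd.read_excel("target_list.xlsx")

# Purchased third-party datasets: consumer attributes and device/geolocation matches.
consumer = pd.read_csv("third_party_consumer.csv")
geo = pd.read_csv("third_party_geolocation.csv")

# Left-join the target list to each dataset on a shared identifier.
enriched = (
    targets
    .merge(consumer, on="hashed_email", how="left")
    .merge(geo, on="hashed_email", how="left")
)

# Export the enriched table as an ad-targeting audience.
enriched.to_csv("ad_audience.csv", index=False)
```

The same pattern, a series of joins on a shared identifier, is what allows demographic selections, consumer data, and geolocation data to end up in a single targeting file.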
Conclusion
Social platforms have been the main driver of expanded public discourse, but their apparent harms call for increased online regulation. However, scrutiny of other actors, such as data brokers, must not be put on the back burner. While the official legislative text of the DSA looks promising in the fight against disinformation, its enforcement, beginning in early 2024, will have to be carefully examined. The practice of targeting individuals to exert political influence is corrosive to democracy.
Because the details always matter, the DSA's efficacy will ultimately be determined by how firm its enforcement is; and if it is firm, the U.S. and other democracies would do well to follow suit. While the exact language of the DSA “may not be exactly right” for other nations, as Barack Obama put it, it is time for democracies to better coordinate their efforts to address these complex problems, and to learn from one another.