Maia Levy Daniel is a tech policy and regulation specialist and a research affiliate at the Center of Technology and Society (CETyS) at Universidad de San Andrés in Argentina.
Let’s say you are at a family gathering, talking about Taylor Swift’s new music video. Since some of the people in the room have no clue what you are talking about, you decide to flaunt your new smart TV and play the video for everyone to watch. You turn on the TV and look for the video on YouTube. Before it starts, a 15-second ad plays automatically, and it cannot be skipped. The ad is about the treatment of an illness that everyone in the family knows you suffer from. Once the ad finishes and Taylor Swift’s video starts, no one is thinking about the song anymore; they are now worried about you, and the atmosphere suddenly turns uncomfortable and a bit gloomy.
Such an experience is not uncommon. You may have searched the web for information on treatments, and YouTube used that data to show you ads on the subject. The scenario could be even worse if the people watching the video with you did not already know about your illness. Many people understand how the system works and why particular ads are shown to a particular person. Thus, we have not only lost control of our most personal data but have also, at times, been pushed to share our innermost secrets with everyone else, whenever platforms decide it is the right time to do so.
At this point, it comes as no surprise that companies use the data they obtain from our online interactions and browsing habits to sell us products, or to sell the data itself, or inferences drawn from it, to other companies. Even knowing this, we still use social media, messaging apps, and search engines because they are essential to our daily lives. It is hard to escape Gmail, WhatsApp, Facebook, and Twitter. Once these platforms have our data, we lose track of what they do with it. We are used to seeing ads on our phones for products we looked for on a computer or tablet. But what happens when those ads can now be shown on a device that is usually shared with other people, as in the Taylor Swift video example above?
We know that the famous phrase “On the internet nobody knows you’re a dog” clearly no longer applies. However, things have gotten much more complex. Over the years, we have lost the most valuable feature of the internet: the possibility of anonymously asking questions we might be embarrassed to pose to another person. That was one of our most personal spaces. And although phones and computers are now frequently personal devices, others, such as TVs and tablets, may be shared with other people.
There are innumerable examples of very sensitive queries we make through digital apps: people asking about illnesses and possible treatments, teenagers seeking advice on intimate identity issues, or people trying to find out how to access an abortion. Exposing this information is not only a breach of privacy but, in some cases, could also ruin people’s lives.
Thus, platforms should be extremely cautious. YouTube can be very helpful for accessing essential information, but the company needs to make sure that the sensitive data it collects through one device is not used to show ads on a different one, even if the user account is the same. Platforms should also avoid using location and IP addresses to target ads; it is precisely this capability that allows ads to follow you onto devices other than the one you typically use to browse for something you are interested in.
But platforms cannot always be trusted to self-regulate, so regulation must address these concerns. The latest amendments to the European Digital Services Act (DSA) proposal would give users a great deal more control over the use of their personal data, and proposals in the United States would limit the use of personal information in targeted advertising. In Latin America, there are no regulations specifically on this topic; in 2021, Mexico passed a law on advertising, including digital advertising, but it does not include any provision on the use of personal data. However, the DSA might have an important impact in the region, similar to the influence the European General Data Protection Regulation (GDPR) has had on data protection laws in various Latin American countries.
We now have more tools to protect ourselves than we had a few years ago. But to restore privacy protections, users need governments to take action and tech companies to take responsibility.
Maia Levy Daniel is a tech policy and regulation specialist. She is a research affiliate at the Center of Technology and Society (CETyS) at Universidad de San Andrés in Argentina and was Director of Research and Public Policy at Centro Latam Digital in Mexico, among other relevant positions in the field. Maia has worked across various sectors and has written extensively on issues around artificial intelligence governance, platform regulation, and content moderation. She holds an LL.M. from Harvard University, a Master’s in Public Policy from Universidad Torcuato Di Tella in Argentina, and a Law degree from the same university.