In the US, teletherapy became increasingly popular during the COVID-19 pandemic. In 2021, an estimated 21% of US adults used a teletherapy service, while mental health startups collectively raised $5.5 billion in funding. These platforms connect users with therapists for virtual counseling sessions. Although some benefits of in-person therapy may be lost in the shift to digital, teletherapy offers added convenience and flexibility for patients with access to a computer or smartphone and stable internet.
But the convenience of teletherapy comes at a cost beyond that of the service itself: your mental health data. Most teletherapy platforms collect a range of intimate information about users’ mental health, such as whether a user has been to therapy before or has had suicidal thoughts. This data is often shared with third parties, including social media companies, advertising technology companies, and data brokers.
Dozens of telehealth websites have been found to send user data to Google, Facebook, Bing, TikTok, Snapchat, LinkedIn, Pinterest, and Twitter, including URLs visited; full names, email addresses, and phone numbers; checkout, add-to-cart, and account-creation events; and users’ answers to health questionnaires. One platform focused on substance abuse was found to use Meta’s pixel tracking tool to send identifiable user responses about self-harm, drug, and alcohol use to Facebook. Numerous websites tied to the national 988 Suicide and Crisis Lifeline were also found to have sent callers’ personal data to Facebook using the Meta Pixel. Mental health app Cerebral shared private information, including data from its mental health assessments, of 3.1 million patients with Facebook, Google, and TikTok in the second-largest breach of health data this year. BetterHelp, often hailed as the “top” teletherapy provider, was recently fined $7.8 million by the Federal Trade Commission for deceiving consumers after promising to keep sensitive personal data private.
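To see why pixel tools leak so much, it helps to look at the mechanism itself. The sketch below is a simplified, hypothetical illustration (the domain, function name, and parameter format are invented, not Meta's actual code): a page embeds an invisible 1x1 image whose URL carries event data as query parameters, so simply loading the image transmits whatever the site chose to attach, including questionnaire answers, to the tracker's servers.

```python
from urllib.parse import urlencode

def build_pixel_url(pixel_id: str, event: str, data: dict) -> str:
    """Build a hypothetical tracking-pixel URL.

    The browser requests this URL as an image; the query string is how
    the event data actually reaches the third party.
    """
    params = {
        "id": pixel_id,   # identifies which advertiser account receives the data
        "ev": event,      # the event name, e.g. a questionnaire submission
        # custom data fields ride along as cd[...] parameters
        **{f"cd[{key}]": value for key, value in data.items()},
    }
    return "https://tracker.example.com/tr?" + urlencode(params)

# A site might fire an event like this when a user submits an intake form:
url = build_pixel_url(
    "123456",
    "QuestionnaireSubmitted",
    {"has_been_to_therapy": "yes", "self_harm": "yes"},
)
print(url)
```

Note that nothing about this request looks unusual to the user: it is an ordinary image load, which is precisely what makes pixel-based sharing so invisible.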
Once shared, data obtained by third-party data brokers continues to be sold. A February 2023 study by researchers at Duke University’s Sanford School of Public Policy illustrates the gravity of the unregulated data trade when it comes to mental health data, finding that many data brokers are marketing highly sensitive information about people’s mental health conditions. While some of this data is aggregated and anonymized, some is personally identifiable, including names, addresses, and incomes.
At the heart of this issue is the limited scope of the Health Insurance Portability and Accountability Act (HIPAA), which only applies to specific “covered entities,” defined as healthcare providers like hospitals, medical clinics, and health insurance companies. Enacted in 1996, the law protects individuals’ sensitive health information from being disclosed without their knowledge or consent. Specifically, the Privacy Rule within HIPAA requires covered entities to enact safeguards to protect “individually identifiable health information” when it is created, received, stored, or transmitted.
HIPAA does not apply to health technologies like apps, websites, and devices unless the technology is considered a “business associate” of a covered entity. Under the regulation, a business associate is defined as “a person [or entity] who creates, receives, maintains or transmits protected health information (PHI) on behalf of a covered entity or another business associate.” While this definition seems at first glance broad enough to cover teletherapy and other telehealth platforms, guidelines developed by the U.S. Department of Health and Human Services indicate that it’s much narrower in scope.
To be considered a business associate, an app or platform must be directly contracted by a healthcare provider for its services. Otherwise, a direct-to-consumer app or platform in which users input their personal health information is not a business associate, even if users input data provided by their healthcare provider, are directed to the app by that provider, use it to send personal health data directly to that provider, or use it to access test results.
Consumers, on the whole, are unaware of these distinctions. Most consumers don’t distinguish between a message sent to their doctor via a hospital web portal and one sent through a telehealth platform; while these are two separate entities under HIPAA, the consumer is sending the same personal information to both. Dense, long, and vague privacy policies add further confusion, often failing to state whether a company is in fact a business associate under HIPAA.
Of course, the practice of sharing and selling personal data is not unique to teletherapy. The collection, sharing, and selling of user data for advertising is the lifeblood of today’s internet. But when this lack of privacy controls intersects with personal health data specifically, it opens the door to numerous potential harms, such as discriminatory pricing for insurance coverage, reputational and financial harm, legal risks and potential prosecution by law enforcement, unwanted surveillance, and predatory or harmful advertising. Even setting these harms aside, the idea that private information about your mental health can be shared with countless third-party companies is unsettling, to say the least.
The simple truth at the center of this mental health data privacy crisis is that there is no comprehensive federal law that regulates how companies use consumer data. The most straightforward policy reform when it comes to protecting health data privacy is to update HIPAA to include teletherapy services (in addition to other telehealth services, wellness apps, digital addiction recovery services, and online pharmacies). But this would not holistically solve the issue, as HIPAA still allows data sharing with user consent. And, as mentioned above, the idea of consent holds virtually no meaning given the lack of consumer awareness and choice. If this myth of user consent persists, then this HIPAA update would likely not accomplish much. Instead, the policy must be revamped to include limitations on the amount and type of data health tech companies can collect and share. Mental health data in particular—as well as other sensitive health data related to substance abuse, reproductive health, and sexually transmitted diseases—should be prohibited from being shared with advertisers and other third parties.
In addition to overhauling HIPAA, there are other technological and policy reforms that can protect data privacy, including banning third-party cookies from browsers and websites and prohibiting the use of ad trackers. Another solution to fill in the gaps left by lackluster privacy policies is to require real-time notifications of data collection, usage, and tracking to users. Rather than checking the “agree” box when first using a website or app and never again encountering a notice, apps and websites could institute regular notifications that specifically tell users what data is being collected, when, and why, and provide an alternate user experience if the user chooses not to provide their data. The lack of consumer awareness and understanding of how data is collected and used is a clear indication that there is also a need for more education on data privacy and security.
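The real-time notification idea above can be sketched in code. The following is a minimal, hypothetical illustration (the class, function names, and fields are invented for this example, not any existing API): before each collection event, the app surfaces a notice stating what is collected and why, and if the user declines, the app proceeds along an alternate path without that data rather than blocking the user entirely.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CollectionNotice:
    """A notice shown to the user at the moment data is collected."""
    field: str       # what is being collected
    purpose: str     # why it is being collected
    timestamp: str   # when the collection is happening

def collect(field, purpose, value, user_consents, notice_log):
    """Collect a piece of data only after notifying the user in real time."""
    notice = CollectionNotice(field, purpose, datetime.now().isoformat())
    notice_log.append(notice)   # the user sees this notice before any data flows
    if user_consents(notice):
        return value            # consent given: the data is collected
    return None                 # consent declined: proceed without the data

# Example: a user declines to share an intake answer; the app still proceeds.
notice_log = []
answer = collect(
    "suicidal_thoughts",
    "matching you with a therapist",
    "no",
    lambda notice: False,   # this user declines
    notice_log,
)
```

The design point is that consent becomes a per-event decision attached to a specific, stated purpose, rather than a one-time checkbox buried in a privacy policy.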
After being sanctioned by the Federal Trade Commission, BetterHelp defended its data-sharing practices as “industry-standard.” And the company wasn’t wrong: nearly 99% of hospital websites have been found to share visitor data with third parties, including advertisers. Current US health privacy laws place limited restrictions on only a small scope of healthcare entities, while teletherapy and other telehealth platforms face virtually no barriers despite having access to highly sensitive health information. Rather than continue with the status quo, the US needs to enact policy that loosens the grip of today’s data economy in order to preserve both healthcare access and privacy: two rights that ought to be fundamental and universal.
Danya Sherbini is an MPP candidate and Irving B. Harris Fellow at the University of Chicago, where she specializes in technology policy and data science. Prior to graduate school, she worked in the advertising technology sector. A long-time proponent of digital equity, she has campaigned for expanded broadband internet access in New York City and has worked with the National Telecommunications and Information Administration (NTIA) to analyze national internet use data. She currently serves as the Executive Editor of commentary at The Chicago Policy Review.