Mental Health App Governance Challenges – Soft Law as a Way Forward
Angelina Manion, Benjamin Faveri / Nov 11, 2024

In recent years, the rise of artificial intelligence (AI) technologies has prompted efforts at the state, national, and international levels to establish rules and guidelines for their use. One area where these technologies have shown promise is in mental healthcare, particularly through the emergence of mental health apps such as PTSD Coach and BetterHelp, released in 2010 and 2013, respectively. Since then, the number of mental health apps available to consumers has exploded, with estimates now ranging from 10,000 to 20,000.
While these apps offer potential benefits, chief among them improved access to mental healthcare, they also face significant governance challenges, many of which were exacerbated by the COVID-19 pandemic. In response to the mental health crisis stemming from the pandemic, some US federal agencies relaxed policies to broaden access to digital health services, including mental health apps. This relaxation, however, magnified pre-existing governance challenges, particularly around data privacy and security, safety and efficacy, and integration into traditional healthcare models. Various US federal and state legislative efforts (hard laws) and international standardization efforts (soft laws) address these challenges with varying degrees of success, leaving several gaps in the governance landscape.
In this article, we briefly outline how these challenges are being addressed, identify where the gaps lie, and propose a three-part soft law governance framework as a way forward. The three parts are a code of conduct, a third-party certification program, and an app rating site for patients and providers that also serves as an online registry of certified mental health apps.
1. Governance Challenges and Existing Governance Efforts
The marketplace for mental health apps has been described as a “wild west,” as traditional hard laws have struggled to keep pace with the rapid development of these technologies. As a result of this pacing problem, governance challenges have surfaced, particularly concerning the privacy and security of user data, the apps’ safety and effectiveness, and their integration into the traditional healthcare system.
Privacy and Security
Mental health apps collect sensitive consumer data, such as users’ thoughts and behaviors, raising significant privacy and security concerns. This personal information can be compromised through data breaches or unauthorized sharing with third parties. For example, BetterHelp, a leading mental health app with approximately 2 million users, shared personal data with Facebook and other companies for advertising purposes despite promising to protect consumer privacy. Following this breach, the Federal Trade Commission (FTC) recommended partial compensation for customers who subscribed and paid for BetterHelp services between August 1, 2017, and December 31, 2020, the period during which the breach occurred. Despite this well-publicized breach, privacy regulations were relaxed during the COVID-19 pandemic to expand access to mental health services, further heightening privacy and security risks. In 2020, the US Department of Health and Human Services allowed healthcare providers to use audio and video technologies without the risk of HIPAA-related fines for privacy and security violations.
Beyond the risk of unauthorized data transfers, mental health apps also carry security risks associated with the devices themselves, such as exposure to computer viruses and the potential loss of the device. This challenge has become more prevalent as health systems worldwide emphasize “data liquidity”—enabling patients to access their health data via apps. Once patients download and store this information on personal devices, HIPAA protections no longer apply.
To protect their privacy, users are advised to research apps carefully and choose those with strong privacy policies and safety features. Soft law approaches, such as industry standards and guidelines, could help alleviate the burden on consumers by ensuring baseline privacy and security in mental health apps.
Safety and Efficacy
The safety and efficacy of mental health apps can vary widely depending on an app’s design, features, and the specific needs of the user. While significant research has focused on the privacy and security of the data these apps store, less attention has been given to evaluating their actual safety and effectiveness. A key regulatory challenge stems from the 21st Century Cures Act, which exempts many mental health apps from FDA oversight by classifying them as “general wellness” tools rather than medical devices. This exemption enables many apps to bypass the safety and efficacy standards typically required under the Federal Food, Drug, and Cosmetic Act.
Even for apps under FDA oversight, many receive approval through expedited pathways with less stringent review processes, resulting in inconsistent safety monitoring and limited assurance of effectiveness. For example, certain suicide prevention apps have been found to suggest dangerous methods of death under the guise of restricting access to them, potentially putting users at greater risk. This highlights the crucial need for governance mechanisms that more rigorously ensure the safety and effectiveness of mental health apps. Some states, like Alabama, have responded with legislation such as SB 272 (2022), which seeks to balance access to mental health services with enhanced safety protocols for telehealth. SB 272 outlines specific conditions for permissible telehealth services and mandates that practitioners maintain a high standard of care, diligence, and expertise in providing these services.
Integration into the Traditional Care Model
Mental health apps have the potential to complement traditional mental health services: they offer accessible, personalized, and potentially effective support, and they can help users monitor their thoughts and behaviors in ways that provide valuable insights for professionals. Even so, their integration into the traditional model of care faces challenges, and there are ongoing concerns about how that integration might affect the overall quality of care.
This was evident in Illinois’s proposed (now defunct) Safe Patient Limits Act, which aimed to prioritize the traditional care model over the use of AI technology in healthcare settings. The traditional care model relies on research-backed guidelines, making it difficult to incorporate mental health apps due to uncertainties around their benefits and outcomes. Despite their increasing use, the challenges of integrating mental health apps into traditional care models remain under-addressed. This is particularly problematic for apps offering treatment plans or advice, as they often lack the peer-reviewed scientific support required for other FDA-regulated medical devices.
2. Three-Part Soft Law as a Way Forward
Current mental health app governance efforts inadequately address these three challenges, especially safety and efficacy and integration into traditional care models. Policymakers should prioritize these challenges in future regulatory efforts. A stronger governance framework built on a three-part soft law mechanism, comprising a code of conduct, a third-party certification program, and an online app rating platform, would address them more effectively. This framework would enhance mental health apps’ safety, privacy, and integration into traditional care, benefiting both patients and healthcare providers by promoting trust and accountability.
The first component of this proposed framework involves establishing a code of conduct for mental health apps. Defined by industry experts and stakeholders, the code would include essential requirements that target these recurrent challenges:
- Validation of claims made by mental health apps through independent, objective empirical assessment, ideally using randomized controlled trials or observational data;
- Robust data privacy commitments, ensuring that sensitive personal information is not repurposed;
- A strong commitment to data security;
- Transparency regarding app functionality, data collection and usage, and real-world performance metrics; and
- A commitment to continuous improvement based on user experience and feedback.
Although not legally binding, adherence to this code of conduct could be monitored through third-party certification, the second element of this framework.
The certification program would help identify compliant apps and could feature certified apps on a public registry (like Mozilla’s mental health app registry with privacy ratings). This would help clinicians and patients locate trustworthy apps, promoting app integration into traditional care settings.
Lastly, an online rating system would allow patients and clinicians to provide direct feedback on mental health apps. Similar to consumer reviews on platforms like Yelp, this rating system could feature a five-star scale with brief written reviews, allowing users to comment on an app’s quality, usability, and overall effectiveness. This final component would enhance transparency and accountability, giving prospective users and professionals insight into an app’s performance in real-world settings.
By integrating these soft law solutions (codes of conduct, third-party certification, and rating systems), this coordinated governance effort can bring order to the current “wild west” of mental health apps. This approach would hold mental health apps to standards of safety, efficacy, and privacy, creating a unified benchmark for the industry and addressing existing regulatory gaps. Ultimately, it would also facilitate the integration of these tools into the traditional care model.