Caroline Sinders is a critical designer and artist examining the intersections of artificial intelligence, abuse, and politics in digital conversational spaces. Justin Hendrix is cofounder, editor and CEO of Tech Policy Press. Cooper Quintin is a security researcher and senior public interest technologist with the EFF Threat Lab, and a board member of Open Archive. Leila Wylie Wagner is a project manager and disaster response professional based in New Orleans, Louisiana.
People are increasingly using “secure messaging apps”—communications apps that use end-to-end encryption, in whole or in part—to share information, engage with one another, and conduct commerce. But while the promise of secure messaging is private communications and user control over the spread of personal or organizational information, the reality is more complicated.
An overlapping and interconnected set of engineering, design, and system factors, coupled with varied user behaviors, creates the conditions for individuals to subvert their own interests or the interests of their communities on secure messaging apps. Recent headlines underscore the practical implications of these circumstances, for instance, for those seeking reproductive healthcare in states where it has been criminalized, protestors opposing authoritarian regimes and law enforcement, and people operating in war zones. Beyond risks to privacy, there are concerns about the spread of malicious propaganda and mis- and disinformation on private messaging apps, which are difficult to moderate.
With funding from a program at Omidyar Network focused on private and trustworthy messaging, we are exploring how design failures, technical flaws, manipulative patterns and adversarial behaviors by various parties may combine to produce malicious effects in secure messaging. Such malicious effects may be patterns that nudge users to share personal information or forward messages to insecure channels, suggestive user interfaces or flawed security mechanisms that can compromise security, or design features that encourage the propagation of mis- and disinformation or enable harassment and abuse.
We are also concerned about other technical issues arising from how various secure messaging apps are implemented, such as: encouraging unencrypted backups, the ease of falling back to unencrypted forms of messaging, improper implementation of cryptographic protocols, the presence of unencrypted metadata, and the implementation of various anti-abuse features such as client-side content scanning.
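To illustrate one of these issues, consider unencrypted metadata. The sketch below is a hypothetical, simplified message envelope (not any real app's wire format): even when the message body is end-to-end encrypted, the routing metadata around it typically remains readable by the server or a network observer. The XOR step is a stand-in for real encryption, not a secure scheme.

```python
import json
import time
from secrets import token_bytes

def make_envelope(sender: str, recipient: str, plaintext: str) -> dict:
    """Build a hypothetical message envelope with an encrypted body
    but plaintext metadata. The XOR with random bytes is only a
    placeholder for a real cipher (the key is discarded here)."""
    key = token_bytes(len(plaintext))
    ciphertext = bytes(b ^ k for b, k in zip(plaintext.encode(), key))
    return {
        "from": sender,            # metadata: visible in transit
        "to": recipient,           # metadata: visible in transit
        "timestamp": time.time(),  # metadata: visible in transit
        "body": ciphertext.hex(),  # only this field is encrypted
    }

env = make_envelope("alice", "bob", "meet at noon")

# The carrier never sees the plaintext body...
assert "meet at noon" not in json.dumps(env)
# ...but still learns who talked to whom, and when.
print(sorted(k for k in env if k != "body"))  # → ['from', 'timestamp', 'to']
```

The point of the sketch is that "end-to-end encrypted" usually describes the message content alone; who communicates with whom, and when, can remain exposed unless an app takes additional measures to minimize or protect metadata.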
More specifically, we are investigating questions such as:
- What design choices have app makers made that lead users to make poor security decisions, confuse them into doing so, or otherwise put them at risk?
- Are there insecure apps that use ‘security UI’ to look more secure than they are?
- Are secure messaging apps betraying their users’ trust through poor technical choices or implementations?
- How does this relate to content moderation capabilities on messaging apps?
- How do people end up compromising their own safety and access to quality information, and how do malicious actors, from oppressive government agents to propagandists to criminals, seek to take advantage?
- What is the prevalence of these phenomena, and what are the policy solutions?
- What are the necessary considerations for violent or illegal activity, and is it possible to have trustworthy, private messaging that does not conflict with trust and safety measures and efforts to mitigate that behavior?
(As an aside, we recognize in most jurisdictions policies and laws do not adequately address the competing equities that surround trust and safety topics, like online harassment, and that there are important but troubling conversations around topics like child sexual abuse material (CSAM) that we will seek to acknowledge in further research.)
Employing a combination of technical assessment and ethnography, this project will develop outputs designed to drive change. We seek to:
- Perform code and protocol analysis of selected secure messaging apps using established techniques;
- Develop connections and distribution networks for the research to government officials, regulators, app developers, and organizations in civil society;
- Respond to and anticipate government hearings, workshops, comment periods and other points of engagement;
- Where possible, build relationships with officials and executives to advance the development of good policy.
Our goal is to produce findings that effect change, including insights that inform better industry practices and standards as well as potential new government regulation and legislation. As Freedom House notes in its latest global report on internet freedom, “[r]obust encryption is fundamental to cybersecurity, commerce, and the protection of human rights,” while “[w]eakening encryption endangers the lives of activists, journalists, members of marginalized communities, and ordinary users around the world.” We posit that encryption alone is only part of this equation; design and user behavior are the other complex variables.
If you have expertise or experience relevant to this investigation (such as cryptography, secure app development, UI design, or work with at-risk communities relying on secure messaging apps), or are aware of other relevant efforts, we hope to hear from you. The project team seeks to engage with experts and community groups that have a stake in the answers to these questions, and thus we seek to enter into dialogue with any interested parties. Stay tuned for more information about our findings, which will be published as a report and discussed in public events in spring and summer 2023.
Secure messaging has great promise, but the flaws and vulnerabilities in the current state of the art suggest civil society groups, technologists, and communities must work together to design new ways of developing and using technology to pursue just outcomes. We hope to contribute to that broader project. Watch this space.