Tech Policy Press: Technology and Democracy

How To Fix Canada’s Proposed Artificial Intelligence Act (Tue, 06 Dec 2022 13:49:59 +0000)

The post How To Fix Canada’s Proposed Artificial Intelligence Act appeared first on Tech Policy Press.

Christelle Tessono is a Tech Policy Researcher at Princeton University’s Center for Information Technology Policy (CITP), where she develops solutions to emerging regulatory challenges in AI governance. 

Sonja Solomun is the Deputy Director of the Centre for Media, Technology and Democracy at McGill University, where she is completing her doctorate. She works on platform governance, AI and climate justice. 

Canada. Shutterstock

Canada is finally joining international efforts to regulate artificial intelligence. In June 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022, consisting of three separate acts, including the Artificial Intelligence and Data Act (AIDA), Canada’s first attempt to regulate AI systems outside privacy legislation.   

After years of growing calls to regulate AI, AIDA is an important and encouraging first step. But it requires further work to provide the adequate oversight, accountability, and human-rights protections that would bring it in line with international precedents in this space. Together with researchers from McGill University’s Centre for Media, Technology and Democracy, the Cybersecure Policy Exchange at Toronto Metropolitan University, and the Center for Information Technology Policy at Princeton University, we outlined key challenges and recommendations for AIDA in a new report, AI Oversight, Accountability and Protecting Human Rights: Comments on Canada’s Proposed Artificial Intelligence and Data Act. 

Below, we summarize our first reactions to the proposed legislation:

1. The Canadian government did not hold a formal public consultation on AIDA. 

AIDA came as a surprise to many working in this space. There were no public consultations on Bill C-27, and the previous iteration of the Bill did not include an AI regulatory framework. If closed-door consultations occurred, there appear to be no publicly accessible records to account for them. 

As economist and journalist Erica Ifill notes, the lack of meaningful public consultation is evident in the Bill’s failure to include provisions acknowledging AI’s capacity to exacerbate systemic forms of discrimination. A more robust Bill will require meaningful public consultation, with the specific goal of enabling greater interaction among technical experts, civil society groups, representatives of marginalized communities, and regulators. 

2. Independent oversight is missing. 

In its current iteration, AIDA lacks provisions for robust independent oversight of the AI market. Instead, it proposes self-administered audits, ordered at the discretion of the Minister of Innovation, Science and Industry when a contravention of the Act is suspected. 

The audit can be done internally by the company under scrutiny, or by hiring an independent auditor, selected and paid for at the audited company’s discretion and expense. However, recent findings by Deb Raji, Peggy Xu, Colleen Honigsberg, and Daniel Ho demonstrate how poor audit quality becomes when the audited company selects and compensates its own auditor. An adequate audit mechanism would ensure that auditor selection, funding, and scope are not established by the audited company, but instead by regulation developed through independent oversight. 

Moreover, the Bill creates the position of Artificial Intelligence and Data Commissioner, a public servant designated by the Minister. The role is tasked with assisting the Minister in the enforcement and administration of the Bill, yet has no power to draft regulation or to enforce AIDA beyond the Minister’s discretion. As such, the Commissioner, who reports directly to the Minister, cannot make critical policy interventions independently. 

As a result, to effectively regulate the AI market in Canada, we recommend that an independent body be vested with the power to administer and enforce the law. We suggest empowering the existing Office of the Privacy Commissioner, or creating an independent body that can enforce the Act. 

Canada can look to several international examples. The European Union’s Digital Services Act (DSA) provides another level of transparency and oversight by mandating audits by independent third-party auditors with both technical knowledge of algorithms and other expertise. The DSA would give national authorities – “Digital Services Coordinators” – and, in some circumstances, the European Commission, the authority to conduct on-site inspections of these companies. 

Closer to home, the United States’ proposed Algorithmic Accountability Act would have authorized and directed the Federal Trade Commission (FTC) to issue and enforce regulations that would require certain entities using personal information to conduct impact assessments and “reasonably address in a timely manner” any identified biases or security issues. 

3. AIDA excludes government institutions. 

The Act does not apply to products, services, or activities under the direction of the Minister of National Defence, the Canadian Security Intelligence Service (CSIS), the Chief of the Communications Security Establishment (CSE), or “any other person who is responsible for federal or provincial departments or agencies”. 

The absence of regulation for these law enforcement and public safety agencies poses significant human rights risks. As illustrated by the Royal Canadian Mounted Police’s unlawful use of facial recognition technology from Clearview AI and the Department of National Defence’s procurement of two AI-driven hiring services, there exists a dangerous precedent that the Canadian government must address. 

Meanwhile, the European Union’s Artificial Intelligence Act only exempts AI systems developed or used exclusively for military purposes. This is only a partial solution – it is imperative that AIDA’s framework be broadened to include government institutions, given the country’s history of unlawful use of AI by public bodies. 

4. Inconsistent definitions for AI systems. 

More broadly, Bill C-27 has significant definitional inconsistencies with regards to AI systems, which could lead to an uneven application of the legislation. 

Instead, it is crucial for Bill C-27 to adopt definitions that are consistent and technologically neutral, i.e., that address the source of concern around a technology rather than the technology itself. It is also important that definitions be future-proofed enough to account for AI’s propensity to exacerbate existing forms of systemic discrimination, such as sexism and racism. 

Instead of defining AI systems based on a limited number of techniques, such as predictive analytics, genetic algorithms, machine learning, or deep learning, the legislation could define these technologies based on their application and how end-users interact with them. One possibility is to define AI systems by their ability to generate outputs such as predictions, recommendations, and other types of decisions. 

Compared to the EU’s prescriptive approach in classifying “high risk” AI systems, AIDA relies on a more principles-based approach, leaving key definitions of “high impact system” to be defined in future regulation. 

5. Bill C-27 fails to adequately address the human rights risks of AI systems. 

More broadly, Bill C-27 does not sufficiently address the human rights risks that AI systems pose, putting it out of step with international precedents. Surprisingly, there are no explicit provisions which acknowledge the well-established disproportionate impact these systems have on marginalized populations such as BIPOC, 2SLGBTQIA+, economically disadvantaged, disabled, and other equity-deserving communities in Canada. 

To address these important gaps, the government should consider developing a framework on the processing of biometric information, adopting heightened protections for children under 18, and including explicit prohibitions on certain algorithmic systems and practices.

For instance, while the EU AI Act prohibits certain practices – including the use of AI for “real time” biometric identification of individuals in public spaces for law enforcement, social scoring systems, systems intended to subliminally manipulate a person’s behaviour, and systems likely to cause physical or psychological harm – AIDA does not currently outline any outright prohibitions on AI systems, including those that would present unacceptable risk. 

Illinois’ Biometric Information Privacy Act (BIPA) likewise outlines strong prohibitions against the private collection of, disclosure of, and profit from biometric information, along with efforts (in both Illinois and Massachusetts) to restrict its use by law enforcement. 

But there is one area of AIDA that could be especially promising. Given Canada’s stated emphasis on “protecting children with Bill C-27,” we remain hopeful that the government will include special category status for children under 18, and will further elaborate on AIDA with high levels of privacy protections by default, especially against commercial use of children’s data. 

We are encouraged by the inclusion of valuable rights to erasure for children’s data in Bill C-27, which also treats the personal information of minors as “sensitive information”, a significant step according to legal experts. Since the age of majority is not defined in the Bill (and poses some jurisdictional tensions among the different Canadian provinces), Canada should follow the United Kingdom’s Age Appropriate Design Code (“Children’s Code”), which sets under-18 as the legal age of a child and outlines design standards to minimize children’s data collection by default.

– – –

Overall, AIDA has a lot of problems and requires significant rewriting. We hope that in the coming months the Canadian government will be receptive to these recommendations and move towards an AI governance framework that centers accountability, independent oversight and the protection of human rights.  


Barriers to Strong DSA Enforcement – and How to Overcome Them Mon, 05 Dec 2022 16:51:03 +0000

The post Barriers to Strong DSA Enforcement – and How to Overcome Them appeared first on Tech Policy Press.

Julian Jaursch is a project director at not-for-profit think tank Stiftung Neue Verantwortung in Berlin, Germany. He analyzes and develops policy proposals in the areas of platform regulation and dealing with disinformation.

Mathias Vermeulen inspired this text with thoughtful remarks on hurdles to DSA enforcement and provided the initial list and analysis of potential enforcement challenges (remaining errors are not his). The text also draws from a SNV policy paper from October 2022.

European Commission building. Brussels, Belgium. Shutterstock

In mid-November, the European Union’s Digital Services Act (DSA) entered into force. Granted, the new European rules for social media, online shops and video apps will not apply just yet, but the formal process is officially over. Businesses are getting ready to implement the rules, from establishing or fine tuning notice-and-action mechanisms to explaining recommender systems to creating more transparency around online ads – and the European Commission and EU member states are getting ready to check their compliance. 

Such rules are well-intentioned and could ideally help to make the online experiences of millions of people better. Yet, that all depends on strong enforcement. Regulators, even the most advanced ones, face considerable barriers to enforcement. It is important to understand such barriers, and to develop solutions to overcome them.

Potential Barriers to Strong Enforcement

Limited resources and staff

To address the most obvious and least surprising barrier first: Considerable budget increases at oversight agencies, especially to hire expert staff, are necessary. Surely, regulators clamor for more money even when no law as big as the DSA is newly on the books. But especially considering the wide-ranging obligations in the DSA, for instance regarding data access, risk assessments and trusted flaggers, this call for resources cannot be disregarded. The substantive negotiations for the DSA might be over, but the budget negotiations regarding funds for staff and other expenses like hardware and software are just getting started. They might be even harder than the negotiations over the content of the DSA, which were highly contested.

Lack of expertise

Strongly connected to the budget considerations is the need to find and retain experts versed in DSA topics. Many regulators across the EU already have years of experience in enforcing EU-wide rules, dealing with corporate giants and adapting to new laws. However, with the DSA’s emphasis on data gathering and analysis, and on fundamental rights considerations in the corporate context alongside economic and internal-market questions, a new crop of experts will be necessary at oversight agencies. So far, regulators often rely on expertise from specific fields, especially law and economics. The idea of attracting experienced practitioners and academics from a variety of disciplines, and/or people working across disciplines, to work at regulators is only slowly taking hold. In addition, administrative structures and hiring practices might be a hindrance.

Risk aversion among regulators

EU policymakers have hailed the DSA as a milestone, and many outside observers have also largely viewed the law in a positive light, despite serious shortcomings. So it’s understandable that there is some enthusiasm among regulators to be at the forefront of enforcing these new rules. For instance, in Germany, the media authorities and the telecoms regulator have called for a role in DSA enforcement. Yet with the DSA’s high profile also comes high risk for regulators to mess up the EU’s attempt to rein in some potentially harmful corporate practices. Thus, there is concern that some regulators might not be willing or able to tackle the big DSA questions. Instead of taking an active role to address issues regarding transparency or trusted flaggers, they could stay content with doing the bare minimum, for instance, ensuring the notice-and-action mechanism works.

Turf wars in member states and with the Commission

In almost no member state was it immediately clear who could and should take on the key oversight role of the national-level Digital Services Coordinator (DSC). Some governments will simply announce which agency will take on this important task; other countries will have to go through the parliamentary process. Consultation processes, while crucial and welcome, add months to this timeline. Both within these deliberations and after a DSC has been designated, turf wars among agencies could seriously impede DSA enforcement. Since the DSA covers various regulatory fields, many regulators stake a claim, as the example from Germany shows. A combative approach in which regulators are unwilling to cede ground, instead of emphasizing collaboration on cross-cutting issues, would be a huge barrier to strong enforcement. This also applies to potential conflicts between member states and the Commission.

Weak links to outside expertise

Researchers and civil society actors are given bigger roles in the DSA than in many other laws. Researchers can request data from platforms and help understand risks associated with platforms. Civil society actors are involved in enforcement as trusted flaggers, for example, or as outside observers to be consulted by an advisory board that brings together all DSCs and the Commission. If agencies do not have strong links with academia and civil society in place, they risk weaker enforcement.

Litigation by tech companies

Tech companies could try to stop or delay enforcement of the DSA by challenging the law itself and/or secondary legislation, such as delegated acts. It might start with big companies questioning whether they are truly “very large online platforms”, but corporate lawyers could surely also take issue with other language in the text. Whether these legal concerns are valid or not, litigating them would take time and delay implementation of the new rules. This could be observed, for instance, in Germany, when some companies challenged rules regarding content moderation in the Network Enforcement Act (NetzDG).

What Can Be Done to Overcome These Barriers

It is important for regulators as well as civil society and researchers to understand these challenges related to DSA enforcement. Identifying hurdles does not have to lead to despair and lethargy, however, but can help to find ways to collaborate and overcome potential obstacles. To start again with an obvious opportunity to clear some enforcement hurdles: The Commission, as well as national regulators, need to be equipped with adequate resources. They are more than mere secretariats monitoring DSA rules. Without them, the law will have no effect. That means that EU and national budget negotiations should allow for budget increases. This additional budget is especially needed to hire a new set of oversight experts from various disciplines and with diverse professional experiences. Hiring processes should be more flexible and shorter, and they should also be open to experienced people without formal academic training.

In addition to the technical hiring process, regulators should continue their push towards becoming attractive, agile destinations for top talent. This does not only relate to wages but speaks more generally to a shift in mindset that is only beginning to take hold at some regulators. The DSA accelerates the need for independent platform oversight agencies to be data-driven, well-connected and collaborative organizations. Externally, that applies to the actual enforcement; and internally, that concerns the working conditions and atmosphere at the Commission and DSCs.

To avoid or alleviate turf wars, member states should emphasize a cross-sectoral, collaborative approach at their DSCs. In practice, this could mean case-based task forces led by a DSC employee that brings together expertise from various regulators, depending on the risk or topic at hand. It would be clear that there is only one DSC per member state but other agencies would ideally not feel left out, knowing that there is a viable mechanism to be included on those topics that concern them. As a first, short-term practical measure, building the information exchange system that the DSA requires to connect the Commission and national bodies should be prioritized. It could serve as a testing ground for cross-national and cross-sectoral communication, even before the first cases land on regulators’ desks.

Some regulators already strive to connect with researchers and other outside experts, but such dialogues need to be further structured and expanded. This could be achieved by creating fellowships that allow experts to serve at the Commission or other regulators for a period of time for a specific question or topic. Thematic, regular roundtables with diverse stakeholders could be another format to explore, as are advisory councils.

Lastly, crucial delegated acts need to be developed quickly and in a transparent, inclusive process. With some of these objectives in mind, the Commission and member states can contribute to robust enforcement and overcome potential barriers. It is understandable that both the ambition and the abilities to create such strong oversight structures vary across governments and regulators. Yet, this only underscores the need to create an EU-wide, collaborative governance regime in which oversight bodies can support each other.


Scrutinizing “The Twitter Files” Sun, 04 Dec 2022 15:00:33 +0000

The post Scrutinizing “The Twitter Files” appeared first on Tech Policy Press.

Audio of this conversation is available via your favorite podcast service.

On Friday, Elon Musk announced via tweet that documents related to Twitter’s decision to intervene in the propagation of an October 2020 story in the New York Post about then-candidate Joe Biden’s son, Hunter Biden, would be made public. The incident caused a furor at the time, with some Republicans and supporters of former President Donald Trump insinuating that it was proof that social media firms are biased against conservative interests. Some even maintain that the actions of Twitter and Facebook with regard to this particular New York Post story may have had some impact on the outcome of the election, as far-fetched as that might be. 

Today, we’ll hear two voices on the disclosures. The first is David Ingram, who covers tech for NBC News and will walk us through what happened. And the second is Mike Masnick, the editor of the influential site Techdirt, who offers his first thoughts on the disclosures, and what they portend for the future of Twitter under Elon Musk.

A transcript of this episode is forthcoming.


Facebook Whistleblower Frances Haugen and WSJ Reporter Jeff Horwitz Reflect One Year On Fri, 02 Dec 2022 15:51:52 +0000

The post Facebook Whistleblower Frances Haugen and WSJ Reporter Jeff Horwitz Reflect One Year On appeared first on Tech Policy Press.


At the Informed conference hosted by the Knight Foundation this week in Miami, Facebook whistleblower Frances Haugen joined Wall Street Journal technology reporter Jeff Horwitz for a conversation reflecting on their relationship before and after the publication of The Facebook Files, the exclusive reports on the tranche of documents Haugen brought out of the company.

The session, titled In Conversation: Frances Haugen and Jeff Horwitz on Tech Whistleblowing, Journalism and the Public Interest, was the first public appearance by the two since the stories first broke in the Journal in September 2021. By October that year, the trove of documents was shared with a range of other news outlets, prompting weeks more of headlines and government hearings over revelations on the company’s handling of election and COVID-19 misinformation, hate speech, polarization, users’ mental health, privileges for elite accounts, and a range of other phenomena.

The conversation touched on Haugen’s motivations, including her fears that tens of millions of people’s lives are on the line depending on whether and how Facebook (now Meta) chooses to address the flaws in the platforms it operates, particularly in non-English-speaking countries. And it explored how her relationship with Horwitz evolved, and the role he played in putting the disclosure of the documents into a broader historical context.

What follows is a lightly edited transcript of the discussion. Video of the discussion is available here.

Jeff Horwitz:

So yeah, we’ve known each other for a while now. 

Frances Haugen:

It’s like two years.

Jeff Horwitz:

Two years. Two years at this point. And I guess let’s start with before we knew each other. I think you and I both had run into a whole bunch of disillusioned former Facebook employees, or in your case current Facebook employees, and nobody quite took the approach that you had to having qualms about the company. And I’d be really curious to ask just how you got to, who you spoke to before me? How you thought about what your role and responsibilities were, and how your thinking developed before we ever met?

Frances Haugen:

In defense of my coworkers, I reached out to you at a very, very specific moment in time in Facebook’s history, which is the day that they dissolved Civic Integrity. So in the chronology of what happened at Facebook, in the wake of the 2016 election, within days– I actually went back and looked at the news coverage– within days of the election people started calling out things like the Macedonian misinfo farms and the presence of these… not even state actors, these people who had commercial interests for promoting misinformation, and how Facebook had been kind of asleep at the wheel. There were a number of features that, as someone who works on algorithms, I had been watching even from the outside and being like, “Why does this feature exist?” There was a carousel that would pop up under any post you engaged with. It’s like you hit ‘like’ on something, a little carousel post would pop up and you could tell that that carousel had been prioritizing stories based on engagement, because all the stories would be off the rails.

It’d be like you’d click on something about the election, you’d get a post being like, “Pope endorses Donald Trump.” You could tell how these posts had been selected. And in the wake of that election, they form the Civic Integrity team, or they really start building it out. And I saw a lot of people who felt very, very conflicted. There’s an internal tension where you know that you have learned a secret that might affect people’s lives. People might die if you don’t fix this problem. Potentially a lot of people. If you fan anti-Muslim bias in India, there could be a serious mass killing, for example. But at the same time you know, “If I leave the company there’ll be one less good person working on this.” So there’s not quite enough resources to do the job that those people deserve. But you also know if I opt out, there’ll be less people.

And so I reached out to you because I saw that Facebook had kind of lost the faith that they had. When you look at the practice of change management, the field of study, how do organizations change? Individuals struggle enough with the ability to change. When you add us together in concert, our institutional momentum, our inertia becomes almost impossible to alter. Because you have to choose a vanguard and point at them and be like, “Well, leadership’s going to protect them. This is where we’re going, we’re going to follow them.” And the moment that I reached out to you was the moment they dissolved that team.

And so I don’t think it’s so much that my coworkers approach the problem in a different way. I think it’s that I had the privilege of having an MBA. I had the privilege of being like, “Oh interesting. Things have meaningfully just changed, and we need help from outside.”

Jeff Horwitz:

And I think there’s a sense that, I mean, generally whistleblowers, they talk to reporters when everything is all wrapped up. Obviously that didn’t happen. What was your initial thought process about what you wanted to involve a reporter with, what you wanted to talk about, and how did that change?

Frances Haugen:

So I think I did not understand what an anomaly I was psychologically until I left Facebook. So when I left Facebook, I started interacting very actively with, or even when I started interacting with people beyond you, they would comment to me, they’d be like, “Oh my god, you’re so stable.” And this is a compliment I hope none of you ever get. It’s very off-putting when someone’s surprised at how stable you are. Like, “Oh God, was I dressing wrong? Was I acting weird? I don’t understand.” But there’s a real thing that most whistleblowers, they suffer in silence and they suffer alone for a very, very long period of time.

They undergo a huge amount of stress alone, and by the time I reached out to you, I lived with my parents for six months during COVID and I would witness things happening and I would have an outlet to go talk to, where my parents are both professors. My mom is now a priest. I felt like I was not crazy. And I think a lot of whistleblowers, because they go through the whole journey alone, they’re completely frayed by the time they begin interacting with the public. And so I had the advantage of, I could approach it calmly. I could approach it as, “I don’t have to rush this.” And the way I viewed it was Facebook, like Latanya [Sweeney] said, they were acting like the State Department. They were making decisions for history in a way that was isolated, and they would have this blindness about trade-offs.

They would put in constraints that were not real constraints. They would say, “We have no staffing, there’s no way we can have staffing.” You make $140 billion, I think it’s $35 billion of profit a year, $40 billion of profit a year. You don’t actually have constraints. That’s a chosen constraint when you say I can’t have staffing. And I wanted to make sure that history could actually see what had happened at a level where we could have rigor. And so that’s why I took so much time and why I didn’t touch it.

Jeff Horwitz:

And what was the, we’ll call it appeal, of talking with me early on during that period, right? Because you were inside, and from the get go, you were very informed about just ranking and recommendation systems. Obviously that was your background, and you’ve been working on Civic. So where was the value, I guess, in talking to somebody who was a reporter, and had you ever spoken to reporters before on other things?

Frances Haugen:

So Jeff is… So as much as I’m Latanya’s biggest fan girl, I glow over Latanya all the time, because I think she’s amazing. One of the things that I think you cannot get enough credit on, Jeff, is the story of the Facebook disclosures is not a story about teenage mental health. That’s the story of the land in the United States. The story of Facebook’s negligence is about people dying in countries where Facebook is the internet. And you reached out to me, it’s true. You did the affirmative action, but I went and I background checked you. I was familiar with your stories before we ever talked. I didn’t associate them with your name, but I read your coverage of India particularly, and the underinvestment Facebook had done and how that was impacting stability there. And I think if you hadn’t shown that you already cared about people’s lives and places that are fragile, like how Facebook treated some of the most vulnerable people in the world, I wouldn’t have been willing to talk to you.

And so part of what I thought– you were so essential in this process– was people who are experts at things don’t understand what is obvious or not obvious to an average person. And I grew so much in my ability to understand the larger picture beyond that specific issue. That’s the issue that made me come forward. The idea that Facebook was the internet for a billion, 2 billion people and that those people got the least safety protections. But you came in and asked me all these questions that helped me understand where the hunger was for information and that allowed me to focus my attention.

Jeff Horwitz:

Did it make it harder for you, in some respects, having somebody who was asking you questions about what mattered and why? 

Frances Haugen:

No, no, no, no. I love the Socratic method. I think the way we learn is in tension and I think the fact that you were present and you reminded me that it mattered, right?

We have work crises. Right after they dissolved Civic Integrity, I got a new boss. I had to go through months of ‘how do you reestablish credibility,’ or ‘how do you learn how to speak a different language effectively because you’re in a new part of the company.’ And I’m so grateful you provided a drumbeat, you knew this was a historic moment and you reminded me of what I was thinking about and feeling in the moment when that transition happened. Because I could have lost the thread. I could have let my life… Our lives are… I’m not a martyr. I wasn’t going to go… this wasn’t my whole purpose, and I’m really glad that you provided an accountability partner and a thought partner in the whole process.

Jeff Horwitz:

Yeah, I think the thing about this whole process for me was… I had to reflect a bit on how, for me, this was all upside, right? There was no way this really goes south for me personally. The worst case was it was going to be some wasted effort. And I think I got… Actually, early on, the first few months, I was a little frustrated with you sometimes because you were hard to reach, or there’d be periods of time you’d just go dark, and I think I probably didn’t appreciate the level of stress that this required of you. And I guess I’d be… Tell me about beginning to think about what you were going to take out of the company, particularly given that a lot of this stuff wasn’t your work product initially.

Frances Haugen:

Yeah, well I think in the early days, part of what was so hard about it was… To give people context, before I joined Facebook. So the job I joined Facebook for was to work on civic misinformation.

And I have lots of friends who are very, very senior security consultants. These are people who go in and are like, “Oh this is how your phone’s compromised. I know you thought your phone was fine, actually it’s slightly slower because you have all the stuff running on it.” And I asked all of them, what should I be asking about before I take this job? Because if you’re going to work on civic misinformation, you are now one of the most… Even if your team is small, you are doing a national security role. China has an interest in your phone now. Russia, Iran, Turkey have an interest in your phone. And everyone told me you should assume all your devices are compromised. Your personal devices, your work devices, you should assume you’re being watched. And so imagine I’m interacting with Jeff and at this point I was working on threat intelligence. I was working on counter espionage, right? Think about the irony there.

Jeff Horwitz:

With the FBI guys. Former FBI guys.

Frances Haugen:

Former FBI guys who are wonderful, some of my favorite human beings, very funny people. Every time I message Jeff I’m like, “Is someone listening to our conversation?” And so it took a period of time for me to get over that anxiety. And one of the things I’m very, very grateful for, Jeff, was there were individual moments where I almost just abandoned the project, because I was like, will this be the thing where China drops an anonymous note to Facebook saying, “Your employee’s doing this,” just so it causes randomness inside of the company? You just don’t even know how to think about those things. And I thought you did a really good job of helping me just think through… I’m a threat modeler… you would sit there and model the threat with me. So I didn’t go through that process alone. I’m very grateful for that.

Jeff Horwitz:

Now, of course there were limits to what I could do. So I could never tell you that I wanted you to do a particular thing. That would’ve been inappropriate from the Wall Street Journal’s point of view and a little presumptuous.

And then I also couldn’t tell you that it was going to be alright.

Frances Haugen:


Jeff Horwitz:

And I guess I’d be curious about your sense of the risk that you were personally at during this period of time. I think you made it a lot easier for me because your position was that if you ended up being caught doing this after the fact, c’est la vie, you would have no regrets. But I guess I’d be curious about what your expectation was and also what your thoughts were about your privacy and anonymity, right? Because I think that changed over time.

Frances Haugen:

Well, there’s two questions. So we’ll walk through each one, one-by-one. One of the things that I am most proud of that came out of everything I did is that Europe passed whistleblower laws. So after Enron, after WorldCom, a big part of what those two whistleblowers did, in the advocacy they did afterwards, was get whistleblower laws passed that applied to me.

The reason why Europe passed whistleblower laws– the big thing I emphasized in my testimony– was that this is not an anomaly. More and more of our economy will be run by opaque systems where we will not even get a chance to know that we should be asking questions because it’ll be hidden behind a black box. It’ll be in a data center, it’ll be on a chip. The only people who will know the relevant questions will be the people inside the company. But the problem with a lot of those systems is that they are very, very intimately important for our lives. But each individual case will probably not have the stakes that I saw with Facebook. So the way I viewed Facebook was, I still earnestly believe that there are 10 or 20 million lives on the line in the next 20 years. That the Facebook disclosures were not about teenage mental health, they are about people dying in Africa because Facebook is the internet, and will be the internet for the next five to 10 to 20 years in Southeast Asia, in South America.

Places that Facebook went in and bought the right to the internet through things like Free Basics. And so for me, I did what I did because I knew that if I didn’t do something– if I didn’t talk to you or someone else, if I didn’t get these documents out– and those people died, I would never be able to forgive myself. And so the stakes for me were, whatever, they put me in jail for 10 years? I’ll sleep at night for 10 years. And for most people, I don’t think it’s going to be that level of stakes.

Jeff Horwitz:

And I think one of the things that happened… so, Puerto Rico. Frances, after going on two years at Facebook, found that due to some quirks in Facebook’s human resources rules, she was going to have to leave the company in a month or return from the US territory she had just moved to.

Frances Haugen:

I really did.

Jeff Horwitz:

And I was invited to come out and make a little visit. And I think, let’s put it this way, I did not have a month-long ticket booked initially. I guess, I’d be curious about what changed when you started exploring Facebook’s systems in a way that went beyond what you ran into in the normal course of affairs for your job.

Frances Haugen:

Well I think it’s also a thing of… so like I said earlier, my lens on this project was… So I’m a Cold War Studies minor. The one thing that most people don’t know is that a huge fraction of everything we know about the Soviet Union, the actual ‘how’ the Soviet Union functioned, was because a single academic at Hoover and Stanford went into the Soviet Union and scanned the archives after the wall fell. He was like, “We don’t know how long this window’s open. We’re going to go in there, we’re going to microfiche everything we possibly can.” A single person was like, “History. If we want to know what happened in history, if we want to be able to prevent things, if we want to be able to not read tea leaves through an organization that lied to us continuously, we have to do this now.” And the window really was small, it was three or four years. They started publishing papers, Russia started feeling embarrassed about some stuff that came out and they locked the best parts about the archive back down.

And so I’m really grateful that you were like, “We have a historic opportunity here. If we want to make sure the history gets written, you have three more weeks.” So the situation was, I’m recovering from being paralyzed beneath my knees and I still have a lot of pain, especially when I’m cold. And during COVID, we could only socialize outdoors in San Francisco, which is a fog-filled frozen freezer. So I went to Puerto Rico, and I found out in the process of trying to actually formalize being in Puerto Rico that you cannot live in a territory, be it a British or American territory, and work at Facebook. And so I informed Jeff out of the blue, I’m like, “Hey, by the way, I have to make a decision next week. Am I leaving Facebook?” And this suddenly made it a very discrete decision: this is the last window, this is the last time we get to ask questions.

And I’m so grateful that you put in a huge amount of effort to just be there and make sure that we asked as many questions as we could so history got to have some kind of record for it.

Jeff Horwitz:

Yeah, I think something we talked about a lot was that this wasn’t replicable. That after a certain point it became apparent, right. And also I think to some degree that’s when your confidentiality went down the toilet too. You were getting into enemy territory inside what is a forensically very, very precise system in terms of tracking what you were doing. I mean, I think we were both baffled that, candidly, Facebook was as bad at observing anomalies in employee behavior on its systems as… I mean what, you being in a document from 2019 in which Instagram-

Frances Haugen:


Jeff Horwitz:

Yeah, yeah. In which the Instagram leadership is presenting a slideshow to Mark. You didn’t work on Instagram, you didn’t work on those issues. You were expecting to get caught basically every morning. I remember when the internet went down briefly, you were like “Oh crap, the game’s up. We’re done.” And I think you had to… I guess to some degree… Obviously I was not requesting you grab things, but did it feel like I was pushing you, and was that a good or a bad thing? Because I was pushing you.

Frances Haugen:

So, I think there’s two issues there. So one is unquestionably, once I really got a clear vision of what the stakes were… so I think it’s always important for us to think about what is the process of knowledge formation? Where does a thesis come from? We start with feelings, we feel that something’s off and then we gather evidence and we begin to have a thesis on what is off and then we become confident in our thesis and then you actually, once you communicate to someone else, you’re like, “Oh, it’s been challenged. I feel this is good.” And I don’t know, at some point in that process, maybe a couple of months in, I started understanding what the scope of what we were doing was, and the last couple of months was incredibly hard for me because I both felt we were not done, we didn’t have an adequate portrait for history. But I also knew if at any point I got caught, no one would ever get to do it again.

And I think in that last three weeks, part of why I’m so grateful… I didn’t really understand the boundaries of what were the most pressing issues to the public, and I think as you ask questions and paint an even fuller portrait, I don’t think anyone who has empathy wouldn’t feel like you could feel the heat of why this needed to be done. And so I’m very grateful that you helped provide context for me of what history needed to know. You were the first draft of history as a good journalist should be.

Jeff Horwitz:

And then let’s talk about working with… So after Puerto Rico is done and things start moving, I’m doing my work with my team of folks at the Wall Street Journal, we’re spinning up very slowly, as a large organization does, and I recall you were doing a lot of things on the legal and prep side, and obviously you made the decision that you were going to go public pretty early on.

Frances Haugen:

I don’t think it was pretty early on. We didn’t make that decision fully until August.

Jeff Horwitz:

Really? Good.

Frances Haugen:

So I left in May and so for context, part of why it is so detailed was I wanted it to stand on its own. It might be surprising given my public persona as a whistleblower, but I’ve had maybe two birthday parties in 20 years. I don’t really like being the center of attention. I eloped the first time I got married. I tried really hard to have a wedding the second time. We had a date, we sent out invites, we still eloped. And the thing my lawyers were very, very clear about was, they were like, “It is delusional for you to think that Facebook won’t out you. If Facebook can control the moment that you get introduced to the public and how you get framed… They know who it is. They can see all the interactions. As soon as those stories come out, as soon as they start asking for confirmation or a chance to respond, they’re going to figure out it’s you.”

So I think the thing that was hardest for me was, to be fair, you said you would start publishing within a month, six weeks.

Jeff Horwitz:

We were very slow at the Wall Street Journal.

Frances Haugen:

And it took you four months.

Jeff Horwitz:

It’s true.

Frances Haugen:

Just in fairness. And so I think it’s one of these things where the amount of education that technologists have about the process of journalism is very, very thin. One of the most valuable things the journalism community can do is provide a lot more education on the process of journalism, particularly to tech employees. Targeted ads are very cheap, or putting up YouTube videos, that kind of thing.

Jeff Horwitz:

What was the rep of reporters inside Facebook, by the way?

Frances Haugen:

Great question. So I remember inside of Facebook there being large public events where executives would talk about how selfish reporters are, and if you know reporters, they might have certain stereotypical character flaws, but being selfish is not, I think, one of them. They’d be like, “They just want the fame and glory. That’s all they’re doing. You’re being used.” And people need to understand even the basic idea of what the role of journalism in a democracy is. I am on the stage because I have one of the lightest-weight engineering degrees you could get in the United States. Super flat out. I went to an experimental college, in its first year; it had the fewest requirements for an engineering degree of anywhere in the United States, probably the world, and I got to take more humanities classes as a result. Every elective class you take beyond your GenEd requirements at a US university decreases the chance that you will get hired at Google, period. Google even knew this. They knew that even Bioinformatics majors had trouble getting hired compared to CS majors.

There is a huge gap where people don’t understand how important it is to be a human being first and a citizen second and employee third. They actively indoctrinate you the other way, to say your loyalty is to your coworkers. And I think a thing that I want to be sensitive on is part of why I was able to do what I did was because Facebook had a culture of openness. And I’m sure there are people inside the company who say, “You ruined a golden age. We had the ability to collaborate and now we don’t, everything’s locked down.” And I think the flip side of that, that I would say-

Jeff Horwitz:

They were saying that before you did that too, though. It was already over.

Frances Haugen:

Sure, yeah. But I think there’s a thing where companies that are willing to live in a transparent world will be more successful. Employees will want to work with them because they don’t have to lie. They’ll be able to collaborate. Between a company that can be open and a company that can’t be open, the one that can be open will be able to succeed.

Jeff Horwitz:

So let’s talk about simultaneously working with the press and with a lawyer and not necessarily entirely, not as a coordinated process.

I recall a lot of people telling you that you were going to go to jail. I recall Edward Snowden and Reality Winner and Chelsea Manning’s name being thrown out.

Frances Haugen:

Yeah. Because Facebook is a nation state.

Jeff Horwitz:

Now I am not a lawyer so I can’t tell you that that’s not going to happen but I can recall thinking to myself, what the hell is wrong with these people? She doesn’t have a security clearance and there was no track record of this happening.

Frances Haugen:

There’s no Espionage Act going on.

Jeff Horwitz:

And then I also recall there being some disputes over who was going to represent you in particular fashions. And I think that process left me feeling… I mean look, obviously in terms of you as an advocate, I can’t do that. I can’t even root for you as an advocate, and as a source, of course, though I cared a lot about you coming out of this as well. And I would be curious what… and I pride myself on taking care of people who are sources, but I felt like I could not really help much and I felt like you had… I guess I’d be curious what resources you found were available to you, and what didn’t work, and what you would like to see in the future for people who might play a similar role to you.

Frances Haugen:

I talked before about the idea that we should not treat me as an anomaly. We should treat me like a first instance in a pattern that we need many more of, because as our economy becomes more opaque, we need people on the inside to come forward. We need to think about what is that pipeline of education, everything from… There are engineering programs in the United States, like UCLA’s, where they don’t talk about whistleblowing in their ethics program. And that’s because one of the board members vetoed it. I won’t name names because I’m not a shamer, but we need to have everything from education– colleges– all the way through current employees.

And then we need infrastructure for supporting those whistleblowers, because right now there are a cluster of SEC whistleblower firms, but there is no way for someone who has no experience with this to differentiate among them. There are firms with very obvious names like Whistleblower Aid, and Whistleblower Aid I think plays a very critical role for national security whistleblowers. I don’t think I was necessarily a good fit for them, but because I didn’t know anything about how to even evaluate lawyers, I got thrown in the middle, and there were parties that tried to step in and say, “You need a different kind of support.” And because I didn’t even know how to evaluate between these parties, and I didn’t know how to assess their interests and things like that, I got really left alone, and things happened where the process was substantially… We came very close to it being disastrous.

Things like, I didn’t get media trained until five days before I filmed 60 Minutes, which was two days before your first article came out. And so I think there’s a deep, deep need for infrastructure, where even having a centralized webpage saying here are the strengths of these different whistleblowing firms– even that level of context is missing right now. And it’s serious, because you have people who have put themselves in very vulnerable positions who end up in places where they don’t know whose interests are being represented, or whether it’s even safe with a lawyer. And so I feel kind of guilty that people are modeling their whistleblowing on mine, and I don’t necessarily think that was the ideal path.

Jeff Horwitz:

I think things got tense between us. It definitely got tense between us. I think partly because I had some concerns about how, just on the source protection level, it seemed like some things were a little weird. And then I also think you had your concerns with how the Wall Street Journal as an institution was approaching it. And I mean, you gave us permission to publish the documents in full had we so chosen. That was not something the Journal wished to do. And I’d be curious about where you felt like journalism– or at least a particular outfit and a particular partnership– was not going to be able to accomplish what you wanted to accomplish. So, what led you to think that?

Frances Haugen:

So I think there’s a couple different puzzle pieces there. So one, I think you had very legitimate concerns about the people who were around me. When I say things like, “We desperately need some infrastructure that is at least a little bit of Consumer Reports on legal representation,” it’s really, really vitally important. Because there was a period of time where the only support I had… I don’t want to say that you had pure intentions. You have your own conflicts of interest, but at the same time you also were seeing really outlier behavior among some of the people who were around me. And I felt really bad for you that due to– I don’t want to use the ‘negligence’ word, I think that’s too aggressive– but there were definitely things that happened where you were harmed because people around me acted in inappropriate ways.

Jeff Horwitz:

It worked out fine for me in the end.

Frances Haugen:

It worked out fine. But the secondary thing, so that’s one thing I want to acknowledge and I think that did cause tension between us because those actions, particularly things like 60 Minutes published some of our SEC disclosures that closely modeled reporting that Jeff was going to do.

Jeff Horwitz:

Which was cool. That was good. Thank you. The Wall Street Journal does not do TV very well.

Frances Haugen:

No, because… No, but I never gave permission to publish it, and it meant that those issues didn’t get introduced to the public with the level of thoroughness they deserved. I love the Wall Street Journal…

Jeff Horwitz:

Oh you’re talking about the SEC cover letters.

Frances Haugen:

Yeah. And it meant that things like random bloggers wrote really cursory things on me instead of having them broken by the Wall Street Journal, which is why I tapped them, and so there were things like that happening.

But the secondary issue is this question of why did we move to the consortium model? And the reason we moved to consortium model was, one, I had brought up since the beginning that I wanted to have non-English sources have access to the documents and have an international consortium as well, and part of why those plans didn’t get as far along before the coverage came out was my partner slipped in a shower in Vegas, under circumstances which we will not go into, and got a frontal lobe bleed in the middle of July.

So literally, I have a conversation with Jeff a week before this where I’m like, “Jeff, we need to start discussing the non-English consortium because I want to have the Wall Street Journal run an English exclusive around an international consortium.” And then I spend the next month with someone who is belligerent because he has a frontal lobe bleed and is irrational. It’s one of the hardest points in my life. And so that didn’t get as much inertia, or we couldn’t even start those conversations until right before the coverage came out. But once I came out, the press team that was working with me, they were like, “We have this real problem, which is there are all these other outlets that feel slighted that they didn’t get the crown jewels, and if you don’t allow them to also be involved in this process, they’re going to make you the story.”

Jeff Horwitz:

I always thought that was… but yes.

Frances Haugen:

I understand. But that’s why we expanded.

Jeff Horwitz:

No, no, it’s a question about process. I mean, I remember you had… Your thought was that basically this was a chance to lay out the case to the US government and lay out a potential way forward for data transparency, as well as for rethinking content moderation and getting past the “take down the bad stuff” arguments, which-

Frances Haugen:

Doesn’t scale.

Jeff Horwitz:

Yeah, dead end.

Frances Haugen:

Doesn’t work and doesn’t scale.

Jeff Horwitz:

The current progress of US legislation I think, I’ve talked to you obviously, we are both into data transparency and there are some people in this audience who are involved in that effort. I think we’re both pretty bullish on that as a thing that would be useful for me doing my job and you as an advocate. Are you disappointed, though, that nothing else has happened in the US?

Frances Haugen:

So one of the things people often ask me is why Europe was able to pass the Digital Services Act, which is a generational law, right? It’s on par with Section 230 in terms of changing the incentives for the internet. Why was Europe able to do that while the United States hasn’t passed any legislation to do that? When someone yesterday said PATA is coming very close to passing– oh, that would be amazing. Nate [Persily], I know you’re in the audience, I’m rooting for you. But Europe was using a fundamentally different product.

Facebook overwhelmingly spent their safety budget on English in the United States because they knew they were an American company that would be regulated by the United States, and Europe is a much, much more linguistically diverse place where I’m pretty sure, for many languages in Europe, they had almost no moderators, right? I had a Norwegian journalist say, “We gave Facebook a thousand posts promoting suicide in Norwegian. We even reported those and none of them got taken down.” And so Europe was operating under a different set of incentives, and so I’m not surprised this stuff hasn’t passed in the United States, because Facebook has spent hundreds of millions of dollars telling people they have to choose between content moderation, freedom of speech and safety.

It’s like, “Oh I know all these things are bad, but this is your only way to solve it.” And I hope that when the Facebook archive goes live– I’m rooting for you also, Latanya– that we can see that there are a huge suite of other approaches and other tools that we can use to make the system safer, but those tools will only get used if we have transparency because the business incentives do not incentivize these issues.

Jeff Horwitz:

Tell me about your new nonprofit and what you are hoping to accomplish in the last year and two months of public advocacy? Where are you trying to push things forward?

Frances Haugen:

So, as a product manager, one of the most important techniques or approaches that I learned at Facebook is that Facebook is a product culture that really believes in clearly articulating what is the problem you’re trying to solve. So you can show me this cool thing you want to build, but unless we can agree on ‘what is the problem we need to solve,’ you may not actually have a good solution. And when I look at Facebook, I don’t see the problem to be solved as there are nefarious hackers inside the company or that there are specific things with these technologies. I see it as there are not enough people sitting at the table that get to weigh in on what the problems are or how those problems should be solved.

And right now there are hundreds of people in the world who really understand how those algorithmic systems work: what the choices are that they face, what the trade-offs are, what the consequences are. And I think we need a million people involved who meaningfully understand and could contribute to a conversation on how we should approach these systems. And so we’re focusing on what I call capacity building and the ecosystem of accountability. So that’s the thousand academics that Latanya talked about. That’s making sure legislative aides in Congress understand what’s possible and what the trade-offs are. That’s civil litigators who understand what it means to cut a corner. That means concerned citizens.

Where is Mothers Against Drunk Driving for social media? We need that whole ecosystem. We need investors to understand what long term profit means versus short-term profit. And we’re working on tools around how do you actually bring that million people to the table?

Jeff Horwitz:

So I think something that I noticed changed after this– and partly because I was involved with it, that probably made it a little bit easier, but I think other reporters in this room have probably noticed as well– was that the NDA culture around Facebook lost its hold.

A lot of people, a lot of former employees were willing to talk. I mean, partly it helped that we had the documents. Obviously the stuff that will go out will be redacted, but we could call up the former employees and say, “Well, we’ve seen your work product already. Sorry.” But I guess I’m wondering what role there is for former employees going forward, and whether there’s a way to incorporate more of them into this discussion. Because I think you got a lot of this too, which is, “Well, screw her. She took blood money for years. How could she possibly be a critic, boohoo?” You got a lot of that early on, and I think there’s been kind of a tendency toward us versus them. I would be curious what you think.

Frances Haugen:

So one of the big projects we want to work on is a simulated social network, and the intention of that is that right now we don’t have any way of actually teaching these things in schools. So one thing that the Twitter whistleblower highlighted was that basically all the safety functions inside Twitter were critically understaffed, and that’s because every person who does my job was trained in industry. We’re going to Mars with SpaceX because they can hire Ph.D. aeronautical engineers, but the public trained them for 10 or 12 years. I think there is a need for those people and their knowledge. Remember, that knowledge is going to evaporate. It’s transient. Facebook is cutting the teams that actually understand their systems. TikTok probably doesn’t invest in the same way in those systems; they’re not as profitable as Facebook is. Twitter cut their teams. We need to figure out ways for those people to begin capturing their knowledge, because once the next wave of social networks comes through– end-to-end encrypted social networks– we will never get to ask those questions again, and that’s the other reason why I captured this.

Because we have to get this knowledge in a set form and reproduce it and teach it because we will not get to learn it once we’re working with end-to-end encrypted social networks.

Jeff Horwitz:

And let’s see.

Frances Haugen:

We should be respectful of time.

Jeff Horwitz:

Yeah, are we… Okay. No, no. It just switched from saying, “Please ad lib for a few more minutes” to “Please wrap.” So boom, we’re done here. See you guys later. Bye.

Frances Haugen:

Thank you guys.

The post Facebook Whistleblower Frances Haugen and WSJ Reporter Jeff Horwitz Reflect One Year On appeared first on Tech Policy Press.

Considering KOSA: A Bill to Protect Children from Online Harms Thu, 01 Dec 2022 15:54:11 +0000

The post Considering KOSA: A Bill to Protect Children from Online Harms appeared first on Tech Policy Press.

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector.

Senator Richard Blumenthal (D-CT) and Senator Marsha Blackburn (R-TN), August 2021. Source

On November 15, Senator Richard Blumenthal (D-CT) and Senator Marsha Blackburn (R-TN), met with a group of parents whose children were harmed after exposure to social media content. The parents shared their stories and lobbied the senators to take prompt legislative action. 

Sens. Blumenthal and Blackburn are Chair and Ranking Member of the Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security, and sponsors of the Kids Online Safety Act (S.3663). Introduced earlier this year, this proposed legislation passed out of committee in July with unanimous support, and its proponents hope that it will be voted on during the present lame duck session, most likely appended to a defense or spending bill. But what would the Kids Online Safety Act (KOSA) do?

What Is In the Bill?

Before delving into the requirements of KOSA, it is important to note two definitions (§2) that underlie the scope of its impact should it become law:

  • A minor is defined as a user aged 16 or below
  • An internet platform appears to be in scope of the legislation if it is “commercial” and “is used, or is reasonably likely to be used, by a minor”. 

Reminiscent of the California Age-Appropriate Design Code Act and the UK’s Online Safety Bill, KOSA (§3) establishes a duty of care for platforms “to act in the best interests of” users who are minors. These “best interests” are defined as prevention of harm from content, such as self-harm material, bullying, sexual exploitation, and advertising of goods and services that are illegal for minors, such as gambling and alcohol, as well as from the design of the platform itself, including “addiction-like behavior” and deceptive marketing practices (also known as dark patterns).

The other most substantive section of the bill (§4) requires platforms to institute “safeguards” to protect minors from harm. Interestingly, most of these are intended to take the form of settings that can be adjusted by the minor user or their parent or legal guardian. These controls must default to the most restrictive setting if the platform “knows or reasonably believes” that the user is a minor. The settings cover discoverability, limits on time spent on the platform (including disabling features that tend to extend it), geolocation, and the use of personal data in recommendation engines, as well as deletion of the account and its data.

Additionally, this section requires giving parents additional control over account settings (including restricting payments) and access to the user’s activity on the platform that may relate to online harms, as well as tracking time spent. If these parental tools are in use, the platform must alert the minor to this. Platforms must also provide minors and parents with a functioning reporting mechanism for the harms designated in the bill. 

The disclosures mandated in §5 cover the aspects of platform policies and affordances relevant to the online harm categories the bill has established, and “whether the covered platform … pose[s] any heightened risks of harm to a minor.” This “heightened risks” assessment portion of the notice must be acknowledged before initial use (because even kids love to read and accept EULAs!), and the notice must also be sent to a parent if the platform knows or “reasonably believes” that the user is a minor, though the acknowledgment can come from either party.

Public reports based on third-party audits are required by §6. These cover the basics: overview of platform usage, a risk assessment, and details of actions taken to mitigate risks. However, the platforms must also conduct their own research about minors’ experience of the platform, solicit input from parents and experts, and account for outside research in the product space.

The next few sections of the bill deal with various research-related topics:

  • §7 creates a system to approve qualifying researchers to get access to platform data in order to study online harms to minors.
  • §8 regulates market research that platforms can conduct on minors, requiring parental consent and instructing the FTC to draw up guidelines.
  • §9 directs NIST to research options for age verification that are effective, feasible, and not unduly intrusive.

Two avenues of enforcement are established in §10: via the FTC’s authority to regulate any “unfair or deceptive act or practice,” and by civil suits brought by State Attorneys General on behalf of residents of their states. 

Lastly, an advisory Kids Online Safety Council is to be convened (§11), consisting of a very broad range of stakeholders.

Support for KOSA

Concerned parents, like those who met with Sens. Blumenthal and Blackburn and some other child safety and mental health advocates, support this bill. Their arguments typically consist of recounting the harms suffered by children, and citing various studies that suggest that use of social media and interactions that occur on social media can result in harm. 

It is very difficult to draw direct lines at scale between use of online platforms and harm, but there does at least appear to be some correlation between intensive social media use and mental health problems in minors. (See Social Media and Mental Health: A Collaborative Review by Jonathan Haidt and Jean Twenge.) Additionally, not all platforms have taken even elementary steps to protect children from harm. For example, the video chat platform Omegle has facilitated numerous instances of child sexual exploitation, and has been criticized—and sued—for its system design (it instituted higher age limits and a moderated mode after some of these cases, though these measures are trivially easy to circumvent).

Criticism of KOSA

Much of the criticism of the bill has come from civil liberties organizations and those supporting LGBTQ+ minors. Points of contention include:

  • The duty of care provision may lead to overfiltering, which could prevent even 15- and 16-year-olds from accessing appropriate sex education materials and much-needed resources for LGBTQ+ teens. Filtering, critics argue, is also of dubious value, as teens are adept at working around keyword restrictions, as is already common on TikTok.
  • For those suffering from domestic or parental abuse, the internet gives access to important sources of support, both person-to-person communications and information. This bill would compromise these resources, as well as basic privacy for typical teenagers.
  • Many services, including essential educational tools, rely on individual-specific algorithmic ranking, so opting out of this functionality via the mandated control would render these platforms unusable.
  • The “reasonably likely to be used by minors” benchmark gives the bill an incredibly wide scope. Platforms are therefore incentivized to adopt age verification services (with their associated privacy concerns) and either close the platform to under-17s, escaping the bill’s scope and burdens such as notice, research, and reporting altogether, or offer minors a radically pared-down service to avoid a complete rebuild.
  • Another danger of this standard is that services would retain more personal information for all users in the attempt to either verify ages or to have usage information available for review by parents (in compliance with §4(b)(2)(e)).
  • Additionally, it is unclear how far into the stack this scope extends. Is AWS in scope as minors access applications and content there? Cloudflare or Akamai, as they direct minors to resources?
  • At least one State Attorney General has been suspected of taking action against social media companies for political reasons in the past. When the topic is content that children are allowed to access, the temptation for culture war-attuned AGs is all too evident.
  • Finally, how would this all work? How does PBS Kids know when a parent hands their tablet to a child for her to watch Daniel Tiger? How will Google know how to send the parent a notice before their child first uses Search on a computer at school?

Another Approach

Several of these critiques reflect either a discomfort or a practicality problem with KOSA’s age definition of a minor. Seventeen is not a typical age of majority in the US, and the cutoff is out of line both with the Children’s Online Privacy Protection Act (COPPA), perhaps the most comparable existing federal law, which applies to those under 13, and with the California Age-Appropriate Design Code Act, which establishes discrete age categories going up to under-18.

Indeed, KOSA’s list of safeguards includes several measures, many in accord with Safety by Design and Privacy by Design principles, that would doubtless be very popular with users of all ages. Who would not want more control over their data, how ranking algorithms use it, and who can contact them? Wouldn’t all users benefit from tools to limit overuse and avoid manipulation by dark patterns? And why should we limit qualified independent researcher access to research about only those harms that pertain to minors?

Two bills already under congressional consideration would address some of these priorities for all users: the ADPPA, which would provide the comprehensive data privacy framework the US has sorely lacked for far too long, and PATA, which would provide researcher access to the social media platforms of most concern, without reference to age. As journalist Karina Montoya suggested in Tech Policy Press, these bills are well-designed, have bipartisan support, and could transform US tech regulation for the better.

Rather than rushing through a bill that impacts a very large proportion of the websites and applications on the Internet and is opposed by serious, independent organizations that care about the wellbeing of vulnerable minors, it may be advisable for Congress to focus on enacting ADPPA and PATA, while carefully observing how the California Age-Appropriate Design Code Act changes the services under its aegis for the better, and perhaps for the worse.

The post Considering KOSA: A Bill to Protect Children from Online Harms appeared first on Tech Policy Press.

November 2022 U.S. Tech Policy Roundup Thu, 01 Dec 2022 13:55:05 +0000


Kennedy Patlan and Rachel Lau are associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues. 

The U.S. Capitol building in Washington DC.

The results of November’s United States midterm elections will shape tech policy both in the lame duck session and for the coming two years as a new political landscape comes into focus. As the GOP prepares to take control of the House in January and Democrats retain power in the Senate and the White House, activists are mounting a push to pass antitrust legislation in the lame duck session. The midterms also carry significant implications for a wide range of tech-related issues, including high-speed-internet funding and online privacy. 

In industry, major layoffs at large tech companies like Facebook, Amazon, Netflix, and Twitter made the news this month as the companies downsized to cut costs. Following Elon Musk’s acquisition of Twitter in October, Musk has fired thousands of employees at the company, including cuts to the platform’s trust and safety, curation, and public policy teams. Twitter’s ad revenue has crashed since the takeover, and Musk claimed that Apple threatened to remove Twitter from the App Store. The platform is also showing signs of larger technical problems, while the new paid function allowing Twitter users to buy verified checkmarks has exacerbated the platform’s existing problems with misinformation. Although Musk has posted preliminary rules on Twitter’s new misinformation and content policies, the company has not announced how it will address false claims made by users and recently stopped enforcing its policy against COVID misinformation. With Twitter’s content moderation system significantly weakened, activists argue that government entities should set content moderation guardrails before Twitter’s misinformation problem impacts upcoming elections.

The analysis below is based on the tech policy tracker we maintain, a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues. Read on to learn more about November U.S. tech policy highlights from the White House, Congress, and beyond.

Final Efforts to Enact Tech Policy Legislation Before a New Congress Arrives 

  • Summary: As the 117th Congress comes to a close, legislators, advocacy groups, and constituents have all been pushing Congress to act on a range of tech legislation. This legislation includes antitrust bills, like the American Innovation and Choice Online Act (S. 2992) and the Open App Markets Act (S.2710); and privacy bills, like the American Data Privacy and Protection Act (H.R. 8152), the Kids Online Safety Act (S. 3663), and the Children and Teens’ Online Privacy Protection Act (S. 1628). Read more below for the latest advocacy updates on each. 


  • The American Innovation and Choice Online Act (S. 2992) and the Open App Markets Act (S.2710): In mid-November, a group of smaller tech companies including Neeva and DuckDuckGo launched an advertising campaign in support of the AICOA, urging Congress to pass the bill before the end of the year. In a letter pushing Senate Majority Leader Chuck Schumer (D-NY) to act on both the AICOA and the Open App Markets Act, more than 40 public interest organizations also encouraged top lawmakers to pass the bills before the end of this lame duck session. Meanwhile, tech executives have been meeting with White House officials to discuss antitrust legislation, and those officials are also engaged with Congressional leadership, expressing support for passing the two antitrust bills during the lame duck session. White House Press Secretary Karine Jean-Pierre commented, “We are very committed to moving ambitious tech antitrust legislation and we’re stepping up engagement during the lame duck on the President’s agenda across the board, including antitrust. There’s a bipartisan support for these antitrust bills and no reason why Congress can’t act before the end of the year.” Brookings Visiting Fellow Bill Baer wrote on the current state of play: “If Senator Klobuchar (D-MN) and Representative [David] Cicilline (D-RI) can rebuild their bipartisan coalition in the next few weeks with the help of an emboldened President Biden, those bills could squeak through before year’s end.” The legislative calendar is tight, however, and prospects for a vote remain uncertain.
  • Merger Filing Fee Modernization Act (H.R. 3843 / S.228 / S.1260): Both chambers have passed versions of this legislation, most recently the House in October. To become law, both chambers must agree on and pass identical bill text. After this month’s problems with Ticketmaster concert sales raised concerns about a lack of competition in the ticketing industry, Senator Klobuchar expressed hope that fans can help Congress move antitrust actions forward. 

Privacy-Related Legislation:

  • The American Data Privacy and Protection Act (ADPPA, H.R. 8152): Co-sponsors involved in the push to pass the ADPPA are eager to get the bill over the finish line. A spokesman for Representative Frank Pallone (D-NJ), the House Energy and Commerce Committee Chair, said that Pallone “is focused on passing the comprehensive American Data Privacy and Protection Act before the end of the year.” David Morar, a policy fellow at the Open Technology Institute, also wrote a piece for Tech Policy Press explaining why this lame duck session is the best opportunity to get the ADPPA passed. 
  • The Children and Teens’ Online Privacy Protection Act (S. 1628) and Kids Online Safety Act (S. 3663): This month, Congress was confronted with meetings and letters from supporters and opponents of the Kids Online Safety Act (KOSA) alike. In early November, Senator Maria Cantwell (D-WA) and Senator Schumer each received letters from online child safety advocates urging them to advance the bill. In mid-November, parents whose children’s deaths were linked to social media met with policymakers including Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), encouraging them to pass bills focused on greater online protections for children and teens. Meanwhile, 95 groups opposed to KOSA sent a letter to Senator Schumer and Senate Commerce leaders, including Senator Cantwell and Senator Roger Wicker (R-MS), urging them to “not move forward” on the bill during the last months of the 117th Congress. The signatories expressed concerns about the bill’s implications for children’s right to access content, its privacy implications for LGBTQ+ and other vulnerable children at home, and the possibility that it could “counter-intuitively encourage platforms to collect more personal information about all users.”
  • What We’re Reading: The Hill reported on the history of the AICOA legislation process and the bill’s roadblocks in Congress. Bloomberg Law covered predictions for White House and Congressional approaches to tech policy in the 118th Congress. 

Looking Ahead to the 118th Congress

  • Summary: With the Democrats surprisingly retaining control of the Senate and the Republicans taking the House, the upcoming lame duck session and split Congress will have implications for tech nominees and policy. The Democrats’ continued control of the Senate is favorable for nominees who are awaiting approval, including Gigi Sohn, who has yet to be confirmed as FCC commissioner; and Democratic FCC Commissioner Geoffrey Starks, who is up for reconfirmation at the end of 2023. On the other hand, according to NPR, Rep. Jim Jordan (R-OH), who will likely become chair of the House Judiciary Committee in January, and Rep. Cathy McMorris Rodgers (R-WA), who is on track to chair the House Energy and Commerce Committee, have historically pushed for content moderation policies that limit the ability of platforms to moderate speech. House Republicans have indicated that passing legislation targeting large tech companies will be a top priority. Rep. McMorris Rodgers’s Big Tech, Censorship and Data Task Force will likely continue to push for the abolition of Section 230, but different perspectives between and among Democrats and Republicans may continue to hinder bipartisan tech policy efforts – Rep. Kevin McCarthy (R-CA), currently the front runner to become Speaker, has not historically supported antitrust efforts and may instead focus on content moderation.
  • The midterms also included state-level tech policy decisions. For example, voters in Montana overwhelmingly approved an explicit ban on law enforcement seizing consumers’ personal data without a warrant in their state constitution. 
  • Stakeholder Response: Tech Policy Press interviewed experts from civil society groups on the impact of the midterm elections on tech policy. Emma Llansó, Director of the Free Expression Project at the Center for Democracy and Technology (CDT) weighed in on the pushback against disinformation research. Yosef Getachew, Director of the Media and Democracy Program at Common Cause, talked about how the slim margins in the House and Senate will likely impact tech policy moving forward. Finally, Matt Wood, Vice President of Policy and General Counsel at Free Press, elaborated on the landscape following the midterms.
  • What We’re Reading: The Wall Street Journal also weighed in on the election’s influence on tech policy. The Register analyzed the predicted impact of the California Congressional block on tech policy in the lame duck period. Axios, CNN, Brookings and Politico all discussed the effects that the U.S. midterm results may have on tech policy. 

Executive Order on Spyware Anticipated in 2023

  • Summary: In late November, the Departments of Commerce and State sent a letter to Rep. Jim Himes (D-CT) announcing upcoming plans for an executive order to “prohibit U.S. Government operational use of commercial spyware that poses counterintelligence or security risks to the United States or risks of being used improperly.” The Biden Administration reportedly plans to announce the executive order in the first quarter of 2023, although the action will be dependent on interagency vetting and approval. The letter comes in response to a September letter from Rep. Himes and fourteen other representatives on the House Permanent Select Committee on Intelligence to Secretary of State Antony Blinken and Secretary of Commerce Gina Raimondo expressing concerns about potentially unethical uses of foreign commercial spyware and pressing the executive branch to address the risks associated with the technology.
  • Stakeholder Response: Rep. Himes responded to the letter at an event hosted by the Center for a New American Security, acknowledging it as a step forward while noting that the proposed executive order stops short of a full operational ban on spyware for the U.S. government. 
  • What We’re Reading: The New York Times reported on the declassification of internal F.B.I. documents on Pegasus and other related spyware. The Washington Post published comments by a senior administration official on the executive order as well as other news in the spyware space. The New Yorker chronicled efforts to sue Pegasus-maker NSO Group in American courts over hacking abuses in El Salvador.

New Legislation and Policy Updates 

  • Integrity, Notification, and Fairness in Online Retail Marketplaces for Consumers (INFORM) Act (H.R. 5502, sponsored by Rep. Jan Schakowsky (D-IL)): The INFORM Consumers Act, which requires e-commerce sites like Amazon to verify the identities of top sellers on their platforms more thoroughly and forces marketplaces to disclose basic information about third-party sellers, passed the House 381-39 this month. The U.S. Chamber of Commerce applauded the passage. The Senate companion bill, S.936, was included in the Senate version of the Fiscal Year 2023 National Defense Authorization Act in October; negotiations over this defense bill are ongoing.
  • Informing Consumers about Smart Devices Act (sponsored by Sen. Maria Cantwell (D-WA) and Sen. Ted Cruz (R-TX)): This month, Senators Maria Cantwell and Ted Cruz introduced the Informing Consumers about Smart Devices Act, which would require the disclosure of the presence of microphones or cameras on products. The House version of the bill, H.R. 4081, passed the House earlier this month.

Public Opinion Spotlight

A poll conducted by the Knight Foundation and Ipsos with 1,024 American adults from October 14-16, 2022 regarding midterm election disinformation found that: 

  • “76 percent of Americans think false information about elections is a problem on social media platforms.”
  • “61 percent of Americans are ‘very’ or ‘somewhat’ concerned people in their community might make a decision on how to vote in the 2022 midterms based on false or misleading information,” while just one in four are concerned they themselves might do so.
  • “84 percent of Americans believe social media companies should restrict posts offering to buy or sell votes with cash or gifts.”
  • “83 percent of Americans believe social media companies should restrict content that gives voters an incorrect day to vote.”
  • “80 percent of Americans believe social media companies should restrict content that misleads voters about how to fill out or submit their mail-in ballot.”
  • “69 percent of Americans believe social media companies should restrict claims of election fraud with inaccurate or no evidence.”

Morning Consult conducted a survey from October 29-31, 2022 with 2,202 U.S. adults regarding the data privacy expectations of Gen Z, baby boomers, millennials, and Gen Xers. They found that: 

  • “52 percent of all U.S. adults say that keeping data private and secure is a baseline expectation they have of companies, and doing that alone does not make them think better of a company.”
  • “70 percent of all adults said they would have a more positive impression of a company that explains to its users how it protects their data.”
  • “71 percent of all adults said they would have a more positive impression of a company that gives users the option to delete stored personal data.”
  • “66 percent of all adults said they would have a more positive impression of a company that does not collect or store any personally identifiable information.”

– – –

We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.

The post November 2022 U.S. Tech Policy Roundup appeared first on Tech Policy Press.

Compute Accounting Principles Can Help Reduce AI Risks Wed, 30 Nov 2022 14:35:39 +0000


Krystal Jackson is a visiting AI Junior Fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where Karson Elmgren is a research analyst and Jacob Feldgoise is a data research analyst. Andrew Critch is an AI research scientist at UC Berkeley’s Center for Human-Compatible AI (CHAI) and the CEO of Encultured AI, a small AI-focused video game company.

Computational power, colloquially known as “compute,” is the processing resource required to do computational tasks, such as training artificial intelligence (AI) systems. Compute is arguably a key factor driving AI progress. Over the last decade, it has enabled increasingly large and powerful neural networks and ushered in the age of deep learning. Given compute’s role in AI advances, the time has come to develop standard practices to track the use of these resources. 

Modern machine learning models, especially many of the most general ones, use orders of magnitude more compute than their predecessors. Stakeholders, including AI developers, policymakers, academia, and civil society organizations, all have reasons to track the amount of compute used in AI projects. Compute is at once a business resource, a large consumer of energy (and thus a potential factor in carbon emissions), and a rough proxy for a model’s capabilities. However, there is currently no generally accepted standard for compute accounting.

Source: Epoch AI. An estimation of the total amount of compute used by various models, in floating point operations (FLOP).

There are two critical reasons for compute accounting standards: (1) to help organizations manage their compute budgets according to a set of established best practices and (2) to enable responsible oversight of AI technologies in every area of the economy. AI developers, government, and academia should work together to develop such standards. Among other benefits, standardized compute accounting could make it easier for company executives to measure, distribute, and conduct due diligence on compute usage across organizational divisions. Moreover, we need such standards to be adopted cross-industry, such that top-level line items on accounts can be compared between different sectors. 

Many large companies already build substantial internal tracking infrastructure for logging, annotating, and viewing the project-specific usage of compute. Cloud-computing providers, such as Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure, provide users with tools to track how their resources are spent. However, there is not yet an industry standard to document compute usage. 

This absence of standardized compute accounting contrasts sharply with the situation for other essential resources and impacts that span across industry sectors, like financial assets, energy, and other utilities, as well as externalities such as carbon emissions, which are tracked using accounting standards. For instance, companies do not invent their own financial accounting software to keep track of money; they use ready-made solutions that work across banks and payment platforms. A single company can easily use multiple banks at once and consolidate all of its revenue and expenditures into a single standardized bookkeeping system using Generally Accepted Accounting Principles (GAAP). Standard practices enable apples-to-apples comparisons between organizations, which in turn fosters trust between investors, lenders, and companies. This trust adds significant economic value by facilitating well-informed transactions of all kinds.
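To make the banking analogy concrete, a standardized, vendor-agnostic usage record might look something like the sketch below. The schema and all field names here are invented for illustration; no such standard yet exists, which is precisely the article’s point.

```python
# Hypothetical illustration of what a vendor-agnostic, GAAP-like usage
# record might contain. The schema and field names below are invented
# for this sketch; they are not drawn from any existing standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class ComputeUsageRecord:
    provider: str          # e.g. "aws", "gcp", "azure", or "on-prem"
    project_id: str        # internal project the usage is attributed to
    hardware_type: str     # accelerator family, e.g. "A100" or "TPUv4"
    device_hours: float    # wall-clock hours summed across devices
    estimated_flop: float  # total floating point operations, if known
    energy_kwh: float      # metered or estimated energy consumption

# Usage of the same (hypothetical) project across two providers.
records = [
    ComputeUsageRecord("aws", "ranker-v2", "A100", 1200.0, 4.1e21, 480.0),
    ComputeUsageRecord("gcp", "ranker-v2", "TPUv4", 800.0, 3.0e21, 310.0),
]

# Because the schema is shared, usage can be consolidated across
# providers into one report, much as GAAP lets a single company
# aggregate accounts held at multiple banks.
total_flop = sum(r.estimated_flop for r in records)
print(json.dumps([asdict(r) for r in records], indent=2))
print(f"total estimated FLOP: {total_flop:.1e}")
```

The design point is that consolidation only works if every provider emits the same top-level line items, regardless of what hardware or billing model sits underneath.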

In contrast, the absence of a compute accounting standard makes it challenging to exercise due diligence and audit compute usage. Both activities rely on consistent measurement and record-keeping, which currently does not exist across the industry or even, in some cases, between a large company’s divisions. This makes it more difficult for companies to conduct due diligence, for organizations to track and audit their use of these resources, and for governments and researchers to study how compute relates to AI progress, risks, and impacts. For example, without a compute accounting standard, measuring the environmental impact of AI training and inference has proven to be challenging.

There are many unanswered questions concerning the best approaches to compute accounting standards, which further research should address:

1. Tools for Companies

With vendor-agnostic compute accounting tools, small companies would not need to invent their own compute accounting practices from scratch; they could simply employ publicly available best practices and tools. Furthermore, if compute providers offered usage reports in a standardized format, then consumers of compute — small and large businesses alike — could more easily track performance across multiple providers simultaneously. Instead of copying or reinventing these systems, companies could reduce costs by picking from a menu of accredited standards from the beginning. A mixture of copying and reinvention already happens to some degree, and there are efficiency gains to be made by standardizing the choices involved at start-up time.

How researchers can help: Continue to build and develop open-source tools for estimating and reporting compute usage. Several programming libraries and tools exist to calculate compute; however, many only estimate compute usage instead of measuring it, while others are vendor specific. Software developers could create general compute accounting tools to build a foundation for implementing practices and standards. 
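To show why estimation differs from measurement, the sketch below applies the rough rule of thumb from the transformer scaling-law literature that training costs on the order of 6 FLOP per parameter per training token. The function name and example figures are ours, for illustration only; an accounting standard would instead rely on measured usage.

```python
# A minimal sketch of the estimation (not measurement) approach, using
# the rough heuristic from the transformer scaling-law literature that
# training costs about 6 FLOP per parameter per training token
# (~2 for the forward pass, ~4 for the backward pass).

def estimate_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Rough estimate of total training compute, in FLOP."""
    FLOP_PER_PARAM_PER_TOKEN = 6
    return FLOP_PER_PARAM_PER_TOKEN * num_parameters * num_tokens

# Example: a 175-billion-parameter model trained on 300 billion tokens.
flop = estimate_training_flop(175e9, 300e9)
print(f"{flop:.2e} FLOP")  # prints "3.15e+23 FLOP"
```

Note the gap such heuristics leave: they ignore discarded training runs, inference, and hardware utilization, which is why standardized measurement would be an improvement over estimation.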

2. Tracking Environmental Impacts

Compute accounting standards could help organizations measure the environmental impacts of their business activities with greater precision. The cost of compute has decreased significantly, enabling many resource-intensive AI projects; however, the increase in compute accessibility has also increased the risk of high-carbon emission projects. Standards that facilitate tracking environmental impacts as part of a risk calculation could allow organizations to manage resources to meet their environmental goals and values. Tracking compute in a standardized way would help elucidate the relationships between energy use, compute, and performance in order to better manage tradeoffs in building AI systems.

How researchers can help: More research is needed to evaluate the environmental impacts of AI. We do not fully understand where and how energy is used in the AI development pipeline. When developers report final training information, they usually do not include previous training runs or consider how energy is sourced. Research into the environmental impact across the AI pipeline and how we can track that impact would help inform metrics and reporting practices. 

3. Critical Resource Tracking

A standard compute accounting measure would enable industry-wide tracking of this critical resource. Such a standard would make it easier for industry associations and researchers alike to study how compute is distributed. A standard would also help policymakers decide whether additional measures are needed to provide equitable access to compute—building, for example, on the mission of the National AI Research Resource.

How researchers can help: Determine what barriers exist to equitable access to computational resources. Identify the best ways to measure and track these disparities so the resulting data can be used to help remedy inequity.

4. Assessment of Scaling Risk

In addition, careful tracking of compute could aid in risk assessment. As AI systems scale up in some domains, they can exhibit emergent capabilities – ones that were entirely absent in smaller models. Since models with emergent capabilities may pose new risks, organizations should consider imposing additional safeguards and testing requirements for larger AI systems. A consistent means of counting the compute used to train AI models would allow for scale-sensitive risk management within and between organizations.

How researchers can help: Additional research on the scaling properties of different model types would help determine the relationship between compute and capabilities across domains. 

– – –

Standards development organizations should convene industry stakeholders to establish compute accounting standards. Specifically, standards bodies such as NIST, ISO, and IEEE should begin working with large companies that have already developed internal practices for tracking and reporting compute usage, in order to establish readily usable standards that serve businesses everywhere. Additionally, technology and policy researchers should conduct relevant research to help inform a compute accounting standard. These actions would help realize the benefits of compute accounting for all stakeholders and advance best practices for AI.

The post Compute Accounting Principles Can Help Reduce AI Risks appeared first on Tech Policy Press.

Three Priorities to Rein in Big Tech in Times of Election Denialism Tue, 29 Nov 2022 13:15:00 +0000


Karina Montoya is a reporter and researcher for the Center of Journalism and Liberty. She has a background in business, finance, and technology reporting for U.S. and South American media.

This essay is part of a series on Race, Ethnicity, Technology and Elections supported by the Jones Family Foundation. Views expressed here are those of the authors.

Apr 3, 2021: Art Deco facade of the Federal Trade Commission Building in Washington, DC. Shutterstock

Americans share the view that something is seriously wrong with the way big technology platforms intermediate social communications. By 2021, 8 in 10 Americans believed that large social media platforms helped spread disinformation more than reliable news. The amplification of online disinformation — a catchall term used here for false or misleading material used to influence behavior — has indeed become a monumental problem. The spread of the “Big Lie,” the unsubstantiated claim that President Joe Biden was not the legitimate winner of the 2020 presidential election, has come to represent the extreme nature of this problem.

More specific concerns vary across the political aisle. Conservatives call out “fake news” and decry censorship of their views on social media, so many want to strip these platforms of their power to moderate content. For many on the right, the solution is to substantially reform or repeal Section 230 of the 1996 Communications Decency Act, which allows platforms to curate content while protecting them from liability for the vast majority of the speech they host. Liberals see outright lies being propagated over social media and believe platforms are not doing enough to remove them, so they defend the ability of platforms to develop content moderation tools like fact-checking and labeling, suspending or removing accounts, and exercising more oversight of political ads.

Fortunately, a way exists to address these concerns while also helping to deal with many other problems in today’s informational environment. But to get there, we need to broaden the conversation and consider how three different policy levers can work together toward meaningful reform: competition policy, data privacy rights, and transparency requirements for platforms. By prioritizing efforts on these three fronts, we can not only go a long way toward solving the problem of disinformation, which peaks in election seasons, but also ameliorate other dangerous knock-on effects threatening democracy, such as the eroding economic foundations of independent journalism.

These policy fronts also present an opportunity to tackle how Big Tech’s operations exacerbate harm to communities of color and other vulnerable groups, such as non-English speaking people. The lack of antitrust enforcement, data privacy protections, and platform transparency obligations in the United States affects these communities in multiple ways: as entrepreneurs, they are virtually unable to challenge technology incumbents on a level playing field; as consumers, they are exposed to harms such as unlawful discrimination in digital advertising; and as voters, they are targeted with disinformation and manipulation by politicians and campaigns. 

1. Competition Policy and Antitrust Enforcement

Competition policy can be enforced in three major ways. First, antitrust enforcement can prevent mergers likely to lessen competition, such as deals between rivals in the same market (horizontal mergers) and between companies with buyer-seller relationships in the same supply chain (vertical mergers). Second, it can prevent business practices that threaten competition or entrench the market power of big firms. Third, enforcement requires empowering federal agencies, such as the Federal Trade Commission (FTC) and the Department of Justice (DOJ), to aggressively prosecute violations of antitrust law, including by pursuing the breakup of dominant corporations.

Under the competition policy in effect up to the 1960s, today’s Big Tech corporations would have faced antitrust suits from federal enforcers and private parties alike. At that time, the understanding of antitrust law — grounded in the Sherman Act of 1890 — held that market concentration in the hands of a few players is likely to weaken competition and bring harmful consequences for workers, small businesses, and consumers. Policymakers and courts were wary of mergers between large firms seeking efficiencies, since their control of large swaths of the market could lead to price hikes or present an insurmountable obstacle to new entrants.

But beginning in the 1970s, the interpretation of antitrust law broke with the past. Conservative legal scholar Robert Bork argued in his book The Antitrust Paradox that antitrust regulation existed to maximize consumer welfare, and that promoting efficiency was the best way to achieve that. Using this lens, policymakers became less concerned with the potential for mergers to harm innovation or small businesses, and courts began to gauge potential harms as those felt by the consumer in the form of price increases. About fifty years later, and within six months of taking office, President Biden issued an executive order seeking to reinstate the historical reading of antitrust laws in order to foster healthier competition.

Antitrust laws regulate many types of business practices and market structures potentially harmful to the American economy. A subset of them governs breaking up existing monopolies and preventing mergers that lead to, create, or maintain outsized corporate power: the 1890 Sherman Act, which prohibits any monopolization or attempt to monopolize a market, and the 1914 Clayton Act, which bans mergers and acquisitions likely to lessen competition or create a monopoly, as described by the FTC. The FTC and the DOJ Antitrust Division can bring federal antitrust lawsuits, as can state attorneys general, who also enforce their own state antitrust laws. Courts ultimately decide how to apply antitrust law, on a case-by-case basis.

In the last two years, several antitrust cases have been brought to court. The DOJ Antitrust Division launched a probe into Google’s alleged monopolization of the online search market. The FTC brought a complaint alleging that Facebook’s acquisitions of Instagram and WhatsApp were anticompetitive, aimed at killing nascent competition. Both cases are moving forward in federal courts. At the state level, a coalition of 17 states and territories led by the Texas Attorney General sued Google over monopolization of the digital advertising market. That case is also making progress in a New York district court.

The digital advertising market is a good illustration of the dangers of unchecked mergers. Most digital advertising outside social media is placed through programmatic exchanges. A very simplified version of how this market works is to imagine three ad tech products: one that serves publishers, another that serves advertisers, and a third, the ad exchange, that connects the two. The price for an ad is set through real-time bidding: publishers offer their ad inventory and advertisers bid for it. The publisher-side ad tech pools inventory according to the demand for certain audiences, and the ad exchange picks the winning bid in a split second. As incredibly efficient as it sounds, it is also a very opaque system rife with fraud.
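As a toy sketch of the bidding step described above, with invented advertiser names and bids, and ignoring targeting, floor prices, and fees, the exchange's job reduces to picking a winner in one function. The second-price rule shown is one common auction design, though real exchanges vary:

```python
def run_auction(bids):
    """Pick the winner of a real-time auction for one ad impression.

    `bids` maps advertiser names to their bid, in dollars, for the
    audience segment the publisher has put up for sale.
    """
    if not bids:
        return None, 0.0
    winner = max(bids, key=bids.get)
    # In a second-price design, the winner pays the runner-up's bid,
    # one of several pricing details that make this market hard to audit.
    other_bids = [bid for name, bid in bids.items() if name != winner]
    price = max(other_bids) if other_bids else bids[winner]
    return winner, price

# Invented advertisers bidding on a single impression
winner, price = run_auction({"adv_a": 2.50, "adv_b": 3.10, "adv_c": 1.75})
print(winner, price)  # adv_b 2.5
```

In a real exchange this decision happens millions of times a second across intermediaries the bidders cannot see into, which is the opacity the European Commission report describes.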

The largest ad tech companies in all these segments belong to either Google or Facebook. Google, more specifically, is under scrutiny by the Texas AG for allegedly using its dominance in the ad exchange market to coerce publishers into managing their inventory with DoubleClick, an ad tech company Google acquired in 2007. But Big Tech routinely downplays the conflicts of interest it creates by operating in multiple segments of the same supply chain. Last year, during the debate on the Digital Markets Act, experts in the European Union called this dynamic out. “Google is present at several layers of the [ad tech chain], where it has large market shares at each stage. The advertiser receives hardly any information on readers of the ad. […] This opacity is almost by design, and could be in itself a manifestation of abuse of market power,” a European Commission report reads.

Another way to enforce antitrust laws is to target practices that entrench the power of dominant market players. One such practice is self-preferencing, which happens when a firm “unfairly modifies its operations to privilege its own products or services,” in violation of Section 2 of the Sherman Act, writes Daniel Hanley, policy analyst at the Open Markets Institute. In the EU, Google faced a heavy fine for using its power over online search to favor the shopping service it introduced in 2007. In the U.S., Amazon uses the same tactics to favor its own brands over those of the sellers it competes against, reports The Markup.

This is exactly the behavior that the American Innovation and Choice Online Act seeks to correct. Even though the bill has bipartisan support, and was approved months ago for a full Senate vote, Sen. Chuck Schumer (D-NY) failed to bring it to the floor prior to the November midterm elections. Its approval is so critical that the White House plans to push for its passage before Republican lawmakers — many of whom are ready to oppose sound antitrust enforcement — shift Congressional priorities in January. 

Market concentration stifles innovation and shuts out competitors, all of which disproportionately affects entrepreneurs, including those from communities of color, who already face severe challenges raising capital and accessing credit. Even organizations whose work would not normally center on antitrust enforcement recognize the impact of this policy lever. Racial justice organizations such as Color of Change support it: in its recently released Black Tech Agenda, antitrust enforcement is one of the organization’s six priorities for advancing racial equity in the technology industry. There are indications that the FTC, under the leadership of Lina Khan, and the DOJ Antitrust Division, led by Jonathan Kanter, are supportive of this approach as well.

2. Data Privacy Rights

As the internet progressively transformed from a government-funded experiment into a privatized network, conflicts between online businesses and advocates for data privacy rights grew and persisted. Regulating data privacy can be as impactful as antitrust enforcement, as it can shift the balance of power between platforms and users over how personal data is obtained and used. This policy front can thus significantly undermine Big Tech’s ability to pervasively surveil users online, a business practice that facilitates voter suppression campaigns, hurts independent journalism, and degrades online safety.

Data privacy rights protect users of digital services when private actors access their information (however it may be defined) in a variety of contexts. Given the expansion of the internet, this regulation focuses on how data is collected, stored, and processed, and for what purposes. The United States has volumes of privacy laws, many of which pre-date the internet, and they were designed mainly to protect privacy in sector-specific contexts: for example, health care (the Health Insurance Portability and Accountability Act, HIPAA), education (the Family Educational Rights and Privacy Act, FERPA), and children’s online data (the Children’s Online Privacy Protection Act, COPPA). The U.S. does not yet have a comprehensive data privacy law. But it does have a de facto regulator for such matters in the FTC. Similar to its role in antitrust enforcement, the FTC has authority to establish new rules for how businesses collect and use personal data.

In 2016, the European Union passed a seminal law regarding data privacy rights called the General Data Protection Regulation (GDPR), which went into effect in 2018. The GDPR seeks to enhance users’ control over the privacy of their online data, and it became highly influential globally. Among its seven principles, four — data minimization, purpose limitation, security, and accountability — have been adopted by various countries. Based on such principles, the right to opt out of data collection for advertising purposes, and the duty of companies to protect such data from unauthorized use, have also been adopted in recent American state laws, such as the 2018 California Consumer Privacy Act (CCPA).

The global push for data privacy protections also responds to the risks that ad tech poses to users’ safety. As mentioned earlier, ads are placed through programmatic exchanges. This system allows Google, Facebook, and many others in ad tech, to follow users across the web and capture their location, content consumed, devices used, among other data, to feed audience profiles. On top of that, technology giants can leverage data they collect from their own web properties, and harvest more detailed profiles that include race, employment status, political views, health conditions, etc. This business model, called surveillance advertising, “relies on persistent and invasive data collection used to try to predict and influence people’s behaviors and attitudes,” writes media scholar Matthew Crain. 

When the Cambridge Analytica scandal broke in 2018, it demonstrated just how much targeting tools based on surveillance advertising can be exploited, especially to feed disinformation toward communities of color. In 2016, one tranche of former president Donald Trump’s campaign ads — run by Cambridge Analytica — was described by his team as “voter suppression operations” targeting Black Americans with disinformation to dissuade them from voting for Hillary Clinton. The GDPR took effect later in 2018, shortly after those revelations, and in principle it could have prevented such forms of data exploitation going forward. But due to weak enforcement, Big Tech was able to work around it by forcing users to accept surveillance of their online activities in exchange for access to their services.

Tech firms’ ability to surveil people’s online activities and the opacity of the digital advertising market also undermine the ability of news media firms to produce journalism sustainably on a level playing field. It is untenable for news organizations to try to “compete” under this system — amassing web traffic to get “picked” for a bid and fill an ad space. Furthermore, most advertising budgets do not go to publishers, but to the ad tech complex dominated by Google and Facebook. Capturing swaths of personal information that individuals would not willingly give up undermines people’s right to privacy, but Big Tech has normalized this surveillance practice into an unassailable business model. Progressively, this has drawn the attention of the FTC, which is currently preparing new rules to stop the harms of commercial surveillance.

Despite criticism of the GDPR’s faulty enforcement in the EU, lawmakers in the U.S. quickly sought to follow similar standards and establish clearer enforcement. The CCPA, for example, gave users the right to opt out of the sale of their data for advertising purposes. Initially, though, the law’s wording left out many Big Tech corporations because they did not sell personal data but shared it. As sharing was not banned, they continued the practice. The law was eventually amended to cover both data sharing and selling, so platforms now offer a “Do Not Sell My Data” option covering both actions. In 2020, California passed a new law to strengthen the CCPA with new restrictions on collecting and using, for example, a person’s race or exact location. In 2023, the California attorney general and the newly created California Privacy Protection Agency will enforce this law.

Congress has followed suit with a bipartisan bill, the American Data Privacy and Protection Act (ADPPA). Approved by an almost unanimous bipartisan majority of the House Committee on Energy and Commerce in July, the ADPPA provides a clear list of permitted uses of personal data. Although it contains strict language limiting data collection for advertising purposes, it arguably does not go far enough to ban surveillance advertising. It does, however, ban the use of sensitive data for advertising — which would protect information such as health conditions, sexual orientation, immigration status, and precise location — unless users opt in. Currently, this is the most complete data privacy legislation proposed in Congress.

Whether through Congress or federal agencies, the U.S. is likely on a path toward new federal standards for data privacy rights. How they are designed and implemented can, in combination with antitrust enforcement, significantly curtail Big Tech’s dominance based on online surveillance. But neither one is a substitute for the other. The intersection of this policy front with antitrust is still the subject of scholarly discussion, as in the work of Frank Pasquale on privacy, antitrust, and power, or Dina Srinivasan on competition and privacy in social media. There is an opportunity for policymakers to incorporate such discussions into further legislative or administrative actions, and to apply a racial equity lens, just as in antitrust enforcement.

3. Platform Transparency and Algorithmic Accountability

All large social media platforms that curate content are built on algorithms that pursue user engagement. This logic applies to both paid and non-paid content placed on users’ timelines. The engagement patterns users exhibit, when aggregated, produce new data that informs the platforms’ ranking systems on how to continue curating content. It is an automated process, and little is known about how these automated decisions are made. Therefore, scholars have a great interest in understanding, for example, how political ads are targeted, how targeting choices influence ranking systems, and what exactly platforms are doing — or not doing — to prevent harmful effects, such as amplifying disinformation.

That quest has been met with obstacles. In 2021, Facebook shut down a study — the Ad Observatory Project, run by a team of researchers from New York University — that examined ad targeting on the platform, revealing discrimination toward people of color by Facebook’s advertising systems. Facebook asserted that the browser plug-in used to collect data from willing participants in the Ad Observatory project posed privacy risks and involved automated data scraping that would violate its terms of service, an argument that failed to convince independent experts. Facebook’s move immediately reignited researchers’ calls for legislation granting access to platforms’ algorithms to determine their impact on society.

Today, researchers recognize Big Tech’s business interests conflict with the need for public oversight over social media, and are looking to remedy that situation. For Laura Edelson, co-creator of the plug-in for NYU’s Ad Observatory, it is time to accept that “voluntary transparency regimes have not succeeded,” and federal legislation is needed. Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, agrees with Edelson. “It is essential that Big Tech companies no longer have the power to determine who has access [to data for research] and under what conditions,” Tromble said last year.

One main issue at stake is how to open the black box of social media’s ranking systems. There is a general understanding that, through machine learning models, ranking systems predict the probability of various outcomes: for example, whether a user will click ‘like’ on a post or reshare it, and whether the content is harmful, based on the platform’s policies. These probabilistic models surface the most engaging content, but also the content that tends to be the most harmful. Mark Zuckerberg once described this as a “natural pattern” in social media content. In the view of large social media platforms, they are fighting consequences they either did not foresee or did not intend to provoke.
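The general shape of such a ranking system can be sketched in a few lines of Python. The probabilities and weights below are invented stand-ins for the outputs of the machine learning models described above; those hidden weights are precisely what outside researchers cannot inspect:

```python
# Each post carries model-predicted probabilities of user actions.
# In a real system these come from opaque ML models; here they are fixed.
posts = [
    {"id": "a", "p_like": 0.20, "p_reshare": 0.02, "p_harmful": 0.01},
    {"id": "b", "p_like": 0.15, "p_reshare": 0.30, "p_harmful": 0.40},
    {"id": "c", "p_like": 0.05, "p_reshare": 0.01, "p_harmful": 0.00},
]

def score(post, w_like=1.0, w_reshare=3.0, w_harm=-2.0):
    """Engagement-weighted score with a penalty for predicted harm.

    The weights are the platform's hidden policy choice.
    """
    return (w_like * post["p_like"]
            + w_reshare * post["p_reshare"]
            + w_harm * post["p_harmful"])

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # ['b', 'a', 'c']
```

With these made-up weights, the highly reshared post still tops the feed despite its high predicted harm, the pattern the paragraph above describes; whether and how a real platform tunes the harm penalty is exactly what transparency rules would let researchers examine.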

But evidence has emerged that Big Tech firms have more knowledge than their leaders publicly admit about the harms their platforms inflict on society, and that they fail to disclose such harms and provide timely redress. Reports have shown, for example, that in advertising, Facebook’s algorithms discriminate against people of color in detrimental and unlawful ways that strip them of employment or housing opportunities, and that the company spares certain influential users from sanctions when they abuse its platform. Without whistleblowers and significant efforts by researchers and journalists who already find it difficult to access Big Tech’s data, we would not even be aware of these problems.

During election seasons, an acute problem related to ranking systems and their effects on vulnerable groups is the amplification of disinformation in languages other than English. It is already hard to find voting information in non-English languages, and voters fill that void with whatever they find, for example, on WhatsApp, which has massive reach among U.S. Latino users. Amid the myriad moderation policies Big Tech activates during elections, what matters is how automated systems apply them. Currently, it is impossible to know how many resources are dedicated to, or how ranking systems handle, languages other than English in the U.S., let alone in other countries.

To enable public oversight that allows these and other findings to be disclosed more readily, researchers call for policies mandating platform transparency and algorithmic accountability. Platform transparency policies could, for example, require Big Tech corporations to give researchers access to anonymized data for public-interest studies, or to alert regulators when they learn their algorithms are causing harm. Algorithmic accountability policies would charge platforms with carrying out independent audits of their algorithms to prevent, for example, racial discrimination, and would penalize them if problems persist.

In 2021, at least four bills were introduced in Congress proposing some version of these measures. One such bill, the Platform Accountability and Transparency Act (PATA), reemerged in a recent Senate Homeland Security Committee hearing as a potentially adequate measure. This bipartisan bill would compel platforms to grant researchers access to anonymized data through a clearing process overseen by the FTC. Platforms that fail to comply would lose the Section 230 liability protection they enjoy for hosting third-party speech. But given the number of bills proposing platform transparency rules, experts such as Edelson have pointed out the need to identify unifying principles across these proposals and the urgency of moving forward with a more thorough platform transparency reform.

Another policy tool to learn from is the EU’s Digital Services Act (DSA), which tackles platform accountability and speech moderation on the continent. The DSA’s Article 40, for example, compels platforms to provide access to certain data to researchers vetted by EU Commission-appointed officials. The DSA also mandates that large platforms conduct risk assessments of their algorithms to investigate how illegal content, as defined by EU law, spreads. The White House has reportedly voiced support for voluntary commitments in the U.S. that would reflect some of the DSA’s provisions, but whether that interest will prevail may become clearer early next year, when the Trade and Tech Council meeting between U.S. and European leaders takes place.

As large social media platforms increasingly act as essential infrastructure for social communications, public oversight can be facilitated by platform transparency and algorithmic accountability rules. These become urgent in light of how technology corporations are prone to use other concerns, such as data privacy, as an excuse to shield themselves from transparency requests. In the case of the Ad Observatory project, for example, Facebook argued that it shut the project down to comply with a privacy decree issued by the FTC after the Cambridge Analytica scandal broke. Eventually, the FTC called on the corporation to correct that record.


Following the emergence of the internet, the outsized power of a handful of corporations now shapes how Americans conduct their social interactions. YouTube, Facebook, and Instagram lead on measures of Americans reporting social media use. Google has a worldwide market share of 93 percent in search; together with Facebook, it practically holds a duopoly in the digital advertising industry. Market concentration, online surveillance, and the lack of platform transparency obligations are fundamental to Big Tech’s business conduct, all of which perpetuates and exacerbates known harms to people of color and vulnerable groups in new, more pervasive ways than in other, more strictly regulated markets.

The three policy fronts described here are neither an exhaustive list nor a comprehensive technology policy prescription. But they tackle the critical areas where Big Tech has the most influence, and where regulators, researchers, and journalists have found the most disturbing risks to democracy and social and racial equity. With a split Congress, GOP leaders like Rep. Jim Jordan (R-OH) have more room to maneuver the policy focus away from these three fronts and toward, for example, the view that specific content moderation restrictions should be imposed on social media firms. It will be key for the White House, policymakers who seek bipartisan agreement, and other organizations that support these policy levers to elevate their voices if meaningful progress is to be attained by 2024.


Dissecting Tech Manifestos Sun, 27 Nov 2022 14:11:00 +0000


Audio of this conversation is available via your favorite podcast service.

For this episode of the Tech Policy Press podcast, I had the chance to speak to Chris Anderson, Ph.D., a professor of sociology at the University of Milan who is leading a course on tech manifestos and their evolution, inviting his students to dissect the language for what it can tell us about politics and power.

Documents such as A Declaration of the Independence of Cyberspace and A Manifesto for Cyborgs have given way to more vacuous statements from billionaires, such as Mark Zuckerberg’s Facebook manifesto, Building Global Community. These days a lot of Silicon Valley’s leaders don’t have much in the way of ideas, but they do have a lot of money, so either way they can push whatever agenda they may have on the rest of us. From promises of abundance delivered by artificial intelligence, to a ‘global community’ convened on social media platforms, to reimagined economies or even a new world order built on the blockchain, tech manifestos remain important, since they often signify large amounts of capital are about to be deployed to try to manifest someone’s new vision.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Chris, tell me a little bit about your research practice. What do you look at and what is your expertise?

Professor Anderson:

So probably, if you asked anyone what my research expertise is, they would say that I study journalism and I study the news. And I think that’s fair and that’s true. But the trick is I’ve never been a journalist. Many people who study journalism have been or are. So I am interested in journalism, I am interested in news, but I’m mostly interested in news and journalism as an institution that makes knowledge, that creates knowledge about how people know things. And in the case of news and journalism, what it does is make knowledge about current affairs, current events, and society at large.

So I think of studying journalism just like you might study science, or you might study libraries, or you might study sociology, or any institution that creates knowledge. And so to me, that’s what I study, and I just so happen to do it through the lens of news and journalism.

Justin Hendrix:

Of course, I suppose through that lens you have a particular interest in not only media, but also media and technology and the role that technology plays in our lives and in the world. And I have of course followed your research and followed your writing, but was struck this summer when you tweeted the syllabus of a course that you’re teaching this fall on tech manifestos. So how did you come to teach an entire course on tech manifestos?

Professor Anderson:

I just started this new job, as I said in the bio, at the University of Milan. And being that it’s in Italy and I don’t really speak Italian much at all and I have to teach classes in English, one of the classes that they gave me was this class with the title, the very sort of meaningless title I have to say, called Languages of the Media. That was the title of the class, Languages of the Media. And the main thing about this class is that it’s taught in English.

For the purposes of giving it to me, its sole distinguishing feature is that it’s one of the few undergrad classes at Milan taught in English. But I kind of decided to take the title seriously and start to think about how we might think about this idea of the relationship between media and language. What is a language of the media? And what might be useful for Italian students to learn, in ways that will both help them think critically and also, if they wanted to join industry someday, be useful for them? Because that’s the thing about these common classes: you always have students who want to be critical thinkers, but you also have students who just want to get jobs.

So I thought, what is some of the most interesting language about technology and about the media? I can’t remember if I had been reading a manifesto or why I thought of this, but I was like, man, what if we did a whole class on analyzing and reading manifestos, primarily though not only technology manifestos? So we read some other fun ones as well, but what if the centerpiece of the course were these tech manifestos? And then I sort of thought, you can learn all sorts of stuff about the media that way. You can learn why does Mark Zuckerberg feel the need to write a manifesto? Why should he care about needing to put his thoughts that way? What is it about technology? What is it about the internet? What is it about digital media that leads these billionaire capitalists to want to sound like Karl Marx, and write these things that sound like radical manifestos?

So that was the starting point. From there on it was sort of off to the races and trying to figure out how to teach a class like that. But I guess I do think that understanding the technology manifesto as a thing and as a genre teaches us an incredible amount about the media world we live in.

Justin Hendrix:

So I see on the syllabus that you do spend a bit of time on the history of tech manifestos. Now this is a relatively short podcast, but can you give us the kind of canned or brief history of the tech manifesto?

Professor Anderson:

The very canned history… so I'm right in the middle of this now, and we just read John Perry Barlow's Declaration of the Independence of Cyberspace, which is partly an answer to your question. So look, the history of these tech manifestos is, to be very blunt about it, the history of the internet in some way. You begin with stuff like the Hacker Manifesto and these very early, very alternative, very subaltern, very fringe communities writing manifestos about technology and the role it played in their lives and what technology did for them.

The second stage is, you see people like John Perry Barlow and The Cluetrain Manifesto crowd, Dave Weinberger, who continued the manifesto style but were writing from very much an individualistic perspective. A bit more web 1.5, somewhere between 1.0 and 2.0. It's a bit more tied into the world of commerce and business than stuff like the Hacker Manifesto was, right? So that's phase two.

And then by phase three you have this genre where this style and this type of rhetoric is now being, as I said earlier, embraced and repurposed by the heads of these gargantuan, potentially very evil (or some people see them as evil) companies. So that's the history of the internet. You go from hackers as these renegade people living on the fringe, to this middle period where John Perry Barlow says: people of the world, you have nothing to do with us, leave us in peace, you'll never regulate us, don't pass laws about us, there's no reason, you have nothing to do with us, go away. To Mark Zuckerberg saying, well, the purpose of Facebook is that we want everyone to be in a community. And that to me is the history.

Justin Hendrix:

Is there a kind of, I guess pre-history rooted maybe more in science fiction or other utopian sorts of visions of the future?

Professor Anderson:

I mean, there’s always been this really neat relationship between utopias, SciFi, and technology. And I do think that there’s a total connection there between these scientific visions of the future and the need to write a manifesto in order to embody that vision in words.

Manifestos are a lot like science fiction in the sense that they're calling into being a world that doesn't exist but could. And here's what manifestos do that SciFi doesn't: there's this philosopher, J.L. Austin, who has this idea of something he calls the speech act. A speech act means that speaking is itself an act; by speaking, you perform an action. The traditional example is a wedding. You say, I now pronounce you man and wife, and that's more than just words. That creates a legal change in the actual world. People are now married, by speaking, who weren't married before.

This relates to manifestos because manifestos are gigantic speech acts. The idea is that if you say it loud enough and in a particular rhetorical style, you will, by speaking, make that world; you will create that world through your own speech. And that's a lot like SciFi. That's the way that science fiction and utopian writing in general has always operated. So I do think that there's a connection between the manifesto and the idea of writing about technology more generally. There's a reason why we have tech manifestos and not necessarily, I mean, we have some conservation manifestos I guess, but not… There is a radical environmentalist manifesto, but the genre isn't quite the same.

Justin Hendrix:

I see on your syllabus that people are reading, in fact, Mark Zuckerberg. Are there some other kind of key manifestos in addition to John Perry Barlow that you’re exposing your students to, including some contemporary ones?

Professor Anderson:

So they’re definitely reading John Perry Barlow and that Declaration of Independence of Cyberspace. They’re reading Mark Zuckerberg, as you said, the Facebook manifesto. In terms of stuff about the internet and about technology. They’re reading Dave Weinberger, The Cluetrain Manifesto. They’re also reading something by Sheryl Sandberg, actually they’re reading the Lean In manifesto, which is not really about technology but is obviously by one of the co-founders of Facebook, the Lean in Manifesto.

They’re reading Donna Haraway in the same week, and Donna Haraway is famous for writing something called The Cyborg Manifesto, which is a key document in sort of third wave feminism saying, Donna Haraway has this great line where she says she would rather be a cyborg than a goddess, which means very much part of today’s debates about gender and the social construction of gender. So they’re reading that, they’re actually reading that the same week they read Sheryl Sandberg, so that should be fun.

And then they’re reading some much more historical ones. So they’re reading The Port Huron Statement by Tom Hayden and crew. I think that will be fun for them to read. Probably the most edgy and avant garde manifesto they read was by this woman Valerie Solanas called The SCUM Manifesto. The SCUM Manifesto is famous because Valerie Solanas is known best for having shot Andy Warhol, and not killing him, but nearly killing him. And then they made a film about this, and this was a sort of key counter-cultural moment in the seventies. I read The SCUM Manifesto fully for the first time a couple weeks ago when I read it with my students and, man, it is one of the funniest things I ever read. I got to say, and purposefully so. I mean, there’s a lot of debate about whether Valerie Solanas meant that thing seriously or as a satire, and it is one of the funniest laugh out loud pieces of writing I’ve ever read.

So we’re discovering all sorts of stuff in here, all sorts of fun stuff for them. And again, it’s the languages of the media, which means that the language is really fun and we can really stop and enjoy that language.

Justin Hendrix:

If there is one term or theme that seems to run through the syllabus, at least as I see it, it is bullshit. So what is the relationship between tech manifestos… Well, maybe it's evident, but what is the relationship between tech manifestos and bullshit?

Professor Anderson:

I don’t think it is evident necessarily. I mean, if it is, I’m trying to keep a question mark for my students for as long as I can. So for me, the key theme of the class, there are two themes. Number one, are tech manifestos all just bullshit? And two, are political manifestos all just bullshit? And if one is and one isn’t, why are there differences? Is it just that we so happen to like the politics of political manifestos and not the politics of tech manifestos? Is it just that we like what one is saying so that’s not bullshit and we don’t like the other one so that is bullshit? Are these all just bullshit? That’s the underlying theme of the class.

And to understand that, we have to have a very technical definition of bullshit, which this guy Harry Frankfurt gives us in an amazing little book called On Bullshit. I'm not going to get the quote exactly right, but he basically starts by saying we live in a world where there's more bullshit than ever before. Why? And he wrote this well before Trump, well before social media, well before everything else. He's trying to understand why there is so much bullshit, and that requires him first to define it, and the way he defines it is amazing. It's not lying. That's the thing about bullshit: it's not lying, it is speech without caring whether you're lying or not. A lie is a lie: there's the truth, and you knowingly say something else. Bullshit is speaking without caring whether what you say is a lie or the truth.

And that’s what Harry Frankfurt says is growing, and that I think is the thing that’s really relevant to our current political situation right now, this question of the relationship between politics and bullshit. Then that gets into all sorts of other questions, is there more bullshit because of technology? Is there more bullshit because there’s social media, we all are just sort of yapping all the time? So we could go a lot of directions with that question. But yes, that is the underlying theme. And the thing I really want my students to think about is are manifestos all just bullshit? And if so, what does that tell us? And if not, why not?

Justin Hendrix:

It’s kind of extraordinary to go back to the Facebook manifesto, which now is what, more than five years old, and look at some of the language in it and think about kind of what we’ve learned about Facebook’s actual effect on society versus what Mark Zuckerberg put down in his 5,500 word manifesto. It’s almost as if it came out at the beginning of the tech clash. It’s like it sort of marked itself a kind of turning point. But all of these things that he’s saying, this idea that Facebook has a real opportunity to help strengthen communities and the social fabric of our society, the idea that what it’s doing is helping us come together online as well as offline, this notion of a global community at which you’ve already kind of called out as bullshit, it’s really quite an artifact when you think about it. As much or more so perhaps than Barlow’s manifesto. I might catch some guff from this, from some listeners. Barlow’s manifesto perhaps has had maybe more influence, but in some ways Zuckerberg himself is a far more influential character than John Perry Barlow.

Professor Anderson:

Yeah. I think it’s interesting to compare the two. I think you’re absolutely right about Zuckerberg. Historically, Mark Zuckerberg’s manifesto is far more important, and I’ll explain what I mean. Historically, I think you’re right. That is the wedge, that is either the last hurrah of an original idea of tech and what it could be, or it’s the first truly bullshit statement of the tech clash. Either way it’s a hinge moment, and he clearly wrote it knowing what was coming. I don’t think he could have written that without some idea of what was about to happen or from some sort of defensive posture. It wasn’t really written I don’t think as a full fledged, this is the world we want to see. I think there was some element in there where he kind of knew what was on the horizon, and this was almost like a preemptive sort of declaration of principle. And in terms of a historical document, I think you’re right. I think that that’s an incredibly important document.

And I think it’s so hard to read. I mean, I was talking about Solanas being fun to read, it’s really, it sounds like it was written by Mark Zuckerberg, let’s just leave it at that. But I do think that historically it’s really important. Since you brought up the comparison, I’ll just say I think Barlow’s is less historically important because it was less influential, but it might be sociologically more important. And what I mean by that is why would anyone in the world out there at this point still think that Twitter should be a place of free speech? Why in the God’s name, after everything we’ve seen, would you have these characters out there who still think that-

Justin Hendrix:

Of complete free speech?

Professor Anderson:

Of complete free speech. That’s what I mean. People like Elon Musk, and I don’t know if Kanye West, what he’s talking about, but there are these characters running around who on some level, it’s not just they’re conservative, it’s not just that they want to make money, it’s not just that they’re lying. I do think that people do genuinely believe that we should have these tech spaces of complete and utter free speech.

And John Perry Barlow’s document is why. That’s why. Maybe that didn’t cause it, but that represents that belief that we can have these worlds where we can say whatever we want and there will be no consequences to us, and not only that, but it will make the world better. I think that document will have a very, to use the words of another tech guru, I think that will have a very long tail, that document. I think that document will, the ideas of that document will be around for a very, very, very, very long time in the way that Zuckerberg’s ideas to the degree he has any may not be, if that makes sense.

So that to me is the distinction. I think that cyber-libertarian idea, which is almost totally discredited now by all mainstream commentators, is still a real idea. Maybe a bad one, but a real idea. And I think it will be back. I said this to my students: at the end of the day, 20 years from now, there'll be a backlash to the backlash, and you will definitely have this idea again.

Justin Hendrix:

I’m not sure we’ll have to wait 20 years. There are elements of the Californian ideology kind of running through I feel like a lot of what we see coming out of people like Keith Rabois, Peter Thiel, others like that. And they’ve taken it to a much darker place in some ways. So I don’t know if you have to wait those decades.

Professor Anderson:

No, and it’s also an open question. I mean, I hate to get all generational on you, but I mean, what does Gen Z think? There’s a lot of muttering in the people who talk about generations in these very sort of general ways that there’s this idea that Gen Z has different ideas about identity and speech and sort of fun and pranks on the internet than maybe the current dominant discourse does. So maybe there’ll be a lot of Gen Z characters quoting John Perry Barlow soon. I wouldn’t say that, I don’t know enough about it to say for sure, but I wouldn’t discount that as a possibility.

My students seem to really like Barlow, I have to say. They were kind of into it in a way they definitely weren’t about either Zuckerberg or Valerie Solanas. They kind of were like, “Oh wow, yeah, we can go online and be ourselves and be whoever the hell we want to be and say whatever we want.” I think that must have seemed like a real utopia to them, because that’s totally different than any internet they know now. So they heard that and they were like, oh wow.

Justin Hendrix:

I wonder if it matters at all that John Perry Barlow himself was a warm and engaging and fascinating person, whereas with Mark Zuckerberg, literally everything he produces seems like a labored public-relations effort to communicate with others.

Professor Anderson:

It’s hard to imagine, I’ll just say this, it’s hard to imagine Mark Zuckerberg writing Grateful Dead lyrics. Let’s just leave it at that. That is something I cannot imagine. Look, say what you want about the old internet, and lots of people have said very bad things about it and accused people like me of nostalgia for it and this that, the other thing. But whatever else you want to say about it was more fun. It was certainly more fun than whatever we’ve got here.

Justin Hendrix:

I remember that when Zuckerberg published his manifesto in 2017, Zeynep Tufekci compared it to Barlow's. But she pointed out that, interestingly, the most significant thing about Zuckerberg's manifesto was not really what was in it, but all the things that he left out, all the things that went unsaid, all of the things that were literally happening in the world, even as reality had begun to sink in, that were simply unaddressed.

Professor Anderson:

No, that’s a really good point. I’d actually forgotten about that piece. I need to, now that you mentioned, I need to go back and look at it again. Zuckerberg’s manifesto was written when… So John Perry Barlow said, look, governments and corporations, you have nothing to do with the internet. We built this ourselves, get out. And Zuckerberg is the most sure fire manifestation that that was wrong. Zuckerberg is a creation of corporate America and the lack of government regulation. So to the degree you see the lack of regulation as government action, he is the perfect mesh of government and corporate world.

And yet he wrote about all of this as if what John Perry Barlow said was still true. He has this remarkable ability to write as if the Declaration of the Independence of Cyberspace were a true description of reality when he is in fact the purest example that it's not at all. So yes, that's absolutely right. I mean, John Perry Barlow could at least log onto Usenet or whatever and still feel like what he was saying seemed real to his lived experience. Zuckerberg has no such excuse.

Justin Hendrix:

I’m sure that before he died, Barlow said many things about Facebook, but I would reckon it would be the opposite of the web that he imagined certainly back in the days of the declaration.

But let me switch gears with you slightly. One of the things that's happening right now, or at least it seems to be, is that there are certain tech figures, and Elon Musk is one of them, Zuckerberg is now one of them, there are many others, who have gained so much wealth and capital (I think of Marc Andreessen) that they have enormous amounts of capital to wager on, or invest in, making their manifesto thinking real. At what point are they prosecuting their manifestos with all that capital? And it seems to work. I mean, we've seen entire markets shift based on manifesto language around things like the shared economy or what have you.

Professor Anderson:

The emergence of a manifesto, I think, is the sure sign that something's going to happen in tech capitalism. If you want a clue that somebody is going to be doing something somewhere, wait until the manifesto comes out. Maybe they don't use the word manifesto, but when that document appears… I mean, look at crypto, and I should caveat this by saying I know nothing about it. To me crypto seems crazy. To your listeners: I know literally nothing about this topic, so to me it's just an example. But crypto is one of the areas most saturated with manifesto language. I think there's manifesto language all over crypto. And with that level of disruption, that level of going after a particular system, you need the verbal groundwork laid for big moves. You know what I mean?

It’s interesting, because people who believe in economics or people who were skeptical of language would say that none of this is necessary. You don’t need to have a crypto manifesto in crypto, just do it. Just who cares what you have to say? But clearly these people doing this disagree, for whatever reason. And this is not a class where I interview manifesto writers, but I would love somebody to do a study and talk to these people and ask them generally, what are you doing with this? Why do you feel the need to write this? And even if they’re totally full of shit, what they said would still be interesting. I would love to know what people think they’re up to when they write these things. So yeah, I think that when a manifesto appears, that’s a sign that there’s going to be market movement somewhere.

Justin Hendrix:

Is there a sense that we spend so much time and pay so much attention to these tech manifestos because they feel like the only place where we can really contest what's actually going on, what the rules are for society? It strikes me that maybe there's some connection back to how broken our politics are in so many democracies, and to whether this contested space, of how tech will work and how it will relate to society, can stand in for them. Crypto's a good example, because you've got these socialist, utopian visions advanced by some individuals, but you've also got these staunch libertarian, or even further-right, characters who see a very different future.

Professor Anderson:

I think that we live in the most discursive world I've ever seen, and every day it gets more discursive. The struggle around meaning and language is real, and it takes up an immense amount of the time of people who care about politics. Now, I don't know if it takes up the time of your average man or woman on the American street; a lot of people debate this. But people do go on all the time about woke, about people being woke. Your average Americans have plenty of things to say about it. They may not know what it means, but it seems to bother them.

But we live in a world where almost all politics is linguistic. Not entirely, but it's as if the iceberg has flipped. The top of the iceberg is huge, and it's all words. The bottom of the iceberg, the institutional, economic, political part, is tiny, going on below the surface. And that's not the way an iceberg should be. An iceberg should have some talking at the top, and then below the surface all these institutions and economies and power structures that determine the way things are.

So maybe manifestos are just part of that, part of that larger linguistic struggle happening right now over seemingly every aspect of American politics. It is, 100% of the time, seven days a week, 24 hours a day, a linguistic struggle to define terms. That's certainly not good, and it's not my idea of what politics should be, but Donald Trump is evidence that if we don't take it seriously, we concede the field to the enemy. A lot of people looked at Trump in 2015 and said: oh, he's just words. He's just rhetoric. He's just this postmodern man trying to create reality with his bluster and his bullshit. And he did. He did, and he does to this day. Donald Trump has nothing but discourse; he has no power other than his mediated image. I exaggerate a bit, but he's like the worst nightmare of postmodernism come to life. No one, I think, would've thought it would ever really get like that. So yes, manifestos are part of this struggle, I think.

Justin Hendrix:

I’m going to ask you a last question. One of the things that I’m interested in, particularly around tech policy and language, is the way in which certain terms migrate into law or migrate into policy. And we’re seeing this now, of course, very much so with people attempting to kind of think through ways to regulate artificial intelligence or to stipulate rules around platform transparency or around content moderation. And there are certain terms, I mean artificial intelligence is itself perhaps the best example, that have now been essentially codified into law. And I suspect that if we went back in time, there might be some connection between the way that industry invested in that term and the way that we kind of think of it today, what it means, what it contains, what it doesn’t.

Professor Anderson:

I think that’s absolutely right. That’s a great insight. Law is dead language. Now, lawyers wouldn’t like me to say this, because for them law is a living thing, but law is freezing language so it has a specific meaning that can be adjudicated in a supposedly fair way. Law freezes linguistic meaning, and policy sort of freezes it more, right? That’s what policy does.

I don’t know enough about the relationship between policy and discourse. That’s just not something I know enough about. I do think that these manifestos and the discourse that surrounds them do ultimately provide the metaphors by which we understand what technology is and what it can do for us. Is the internet a homestead? Is the internet a walled garden? Is the internet an information super highway? Is artificial intelligence natural language processing? Is it machine learning? Is it what they meant by artificial intelligence back in the 1960s? What is it that we mean by artificial intelligence?

And I think that the metaphors we use to talk about these very abstract things do eventually become part of the policy world. Maybe the problem is that by the time they get there, the discourse has already moved on, and that's a problem. John Perry Barlow was right about that for sure: cyberspace moves a lot faster than real space. Whatever else you want to say about it, that much is true. And by the time the policy world catches up, the technology world is already at least one step, if not multiple steps, ahead. That's a big problem, and I have no idea how we deal with it.

Justin Hendrix:

What do you hope your students will take away from this? If one of them comes back to you in five or 10 years time, maybe they’ve gone to work for a tech firm, what do you hope they’ll say to you?

Professor Anderson:

I hope they will say that they got a sense of how politics really works: that politics is as much about rhetoric and language and speech and ideas as it is about votes and voting and the legislature or the parliament or whatever. I think that by and large, even now in the 21st century, when young people hear politics, they think voting, and they think about what people in government do. But politics is language. Politics is speech, politics is metaphors, politics is visions of how things ought to be. And I hope they'll come away understanding how politics works in that way.

And the other thing I hope is just that they’ll learn to identify bullshit when they see it and not trust it. I want them to learn to be skeptical but not cynical about the world. That’s my wish for every student I’ve ever taught since I started teaching. Be skeptical about the world, but don’t be cynical about the world. And if you can manage that, you’re producing good citizens. And I think that as teachers, that’s what we all should be up to.

Justin Hendrix:

Chris Anderson, thank you very much.

Professor Anderson:

This was great. I’m really glad we managed to find the time to do it. Thank you so much.

The post Dissecting Tech Manifestos appeared first on Tech Policy Press.

How To Open An Outpost In Social Media Exile Sat, 26 Nov 2022 03:41:29 +0000


Lacking options, many seek refuge on Mastodon, writes David Carroll, while others embrace change.

It's de rigueur these days to own your own social media platform as a vanity project to amplify your own malignant narcissism. In the case of a certain billionaire, it seems he bought Twitter to own the libs. Rather than be owned, academics, activists, curious journalists, and concerned citizens stampeded en masse into a previously obscure alternative social media space known as the Fediverse (a parallel universe of free and open-source platforms and services built around decentralization). At one peak in the past week, tens of thousands of new Mastodon accounts were being created every hour, and the active user base doubled from 1 to 2 million users in the span of a week, according to the software's chief developer.

Mastodon growth chart mapped to Elon Musk's moves.

The primary software in this alternate universe, which largely replicates the experience of being on Twitter, is called Mastodon. However, Mastodon is not a replacement for Twitter. In many ways, its creators and community want to build an anti-Twitter, or at least a billionaire-proof Twitter. I created an account on the Mastodon project's own server when it first appeared in 2016, and found it curious, if a little befuddling. I foolishly allowed myself to get worked up into a tizzy about its DM feature not being encrypted. My original account, on what has since become known as the Big Instance, gathered dust. Twitter's grip on my attention seemed invincible.

By the fall of 2022, the writing was on the wall: the direction Elon Musk was taking Twitter clarified the cause for alarm. He granted amnesty to the most harmful voices while purposefully chasing benevolent ones away; his plan to destroy Twitter as we knew it became undeniable. He's pivoting our once-beloved doom-scrolling app into an all-out shitposting service. In response, I checked out the feeds over on the 'fedi,' and sure enough there was a bustle of activity. I noticed chatter about a dedicated, managed Mastodon hosting service that offered basically one-click setup. I may have been one of the last new customers to join before the operator had to suspend sign-ups to stabilize capacity and demand. Being a legacy blue-check person on the birdsite (as it's referred to by folks on Mastodon), I was dismayed at the prospect of having to rent my own identity and, worse, be charged based on my following and reach. What if I took the $8 soon to be levied by Elon Musk for subscriptions on Twitter and instead spent it on my own Mastodon server?

Pricing plans from

Thanks to the managed hosting service, I had my newly purchased domain pointing to an IP address, and in a few clicks it magically turned into a brand-new server on the Fediverse. I lasted a day as a single-user instance before opening it to other members. As such, I became what's known as an Admin, or server/instance owner. Fascinatingly, once you find yourself running a server on the Fediverse, you are instantly ordained the autocratic ruler of your own domain. You set the rules. You enforce them, even if arbitrarily or capriciously. You basically become the Elon of your own Twitter. You can even hand out blue checks willy-nilly, because you control the custom emoji on your server. Weirdly enough, prominent Twitter users started applying to join my Mastodon instance almost immediately.

It’s been a learning experience. Here are some FAQs and initial thoughts based on my first couple of weeks as an Admin.

Who Built The Fediverse, And For Whom?

Trans, queer, neurodivergent, disabled, and other marginalized folks who exiled themselves from Twitter during the Trump years not only helped design the protocols and specifications (e.g., ActivityPub is part of what makes the magic work) but also promulgated the distinct culture of care that animates the etiquette and discourse of the Mastodon environment. They deliberately set out to make a better version of social media, all while espousing the principles of free and open-source software, which intrinsically resists overt commercialization.
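For readers curious what that protocol layer actually looks like, here is a minimal sketch in Python of the kind of JSON-LD document ActivityPub servers exchange when someone publishes a post. The actor URL, server name, and helper function are hypothetical illustrations; the field shape follows the W3C ActivityStreams vocabulary that ActivityPub builds on, not Mastodon's exact internal representation.

```python
import json

def make_note_activity(actor_url: str, content: str) -> dict:
    """Wrap a short post ("Note") in a "Create" activity, the basic
    unit of publication in ActivityPub. Names here are illustrative."""
    note = {
        "type": "Note",
        "attributedTo": actor_url,
        "content": content,
        # Public addressing: anyone may see and relay this post
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor_url,
        "object": note,
    }

activity = make_note_activity(
    "https://example.social/users/alice", "Hello, Fediverse!"
)
print(json.dumps(activity, indent=2))
```

A real Fediverse server would sign this document and deliver it over HTTPS to the inboxes of the actor's followers on other instances; that delivery machinery is what "federation" means in practice.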

However, the Mastodon model has not evolved into true democratic governance, and as a result tensions have simmered between its communities and its Benevolent Dictator For Life (BDFL), Eugen Rochko (@Gargron), who maintains executive control over the Mastodon project through his nonprofit organization in Germany, occasionally infuriating people with the power he wields.

Mastodon's lead dev remarks on ex-Tweeps forming their own server.

Some of the features that these communities persuaded Rochko to adopt include the distinctive CW feature. People are encouraged to set Content Warnings (CWs) over a broad range of trauma-related topics. Another way I like to think about this feature is as Consent Widgets: instead of trying to trigger people on their timelines and getting rewarded for it, you erect some friction, akin to a subject line, creating a layer of inferred consent in the timeline experience. A Consent Widget asks: would you consent to opening up my post about an intense topic? This fosters something different from the barrage of neurotic stimulations in the game of performative manipulation on Twitter and other social platforms. For folks trying to manage their mental health or avoid triggering topics, the Consent Widget feature expresses a kindness that you don't find elsewhere.
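As a concrete illustration of how CWs surface in the software, Mastodon's REST API exposes the warning as a `spoiler_text` field when a client publishes a status. The sketch below only builds that request payload; the helper name is my own, no network call is made, and the example text is invented.

```python
def status_with_cw(text: str, warning: str) -> dict:
    """Build the JSON body a client would POST to /api/v1/statuses
    on a Mastodon instance to publish a post behind a Content Warning.
    Field names follow Mastodon's statuses API; the helper is illustrative."""
    return {
        "status": text,            # the body, hidden until the CW is clicked
        "spoiler_text": warning,   # the Content Warning shown up front
        "sensitive": True,         # also blurs any attached media
        "visibility": "public",
    }

payload = status_with_cw(
    "A long post about the latest birdsite meltdown...",
    "US politics, social media drama",
)
```

A real client would send this payload, along with an OAuth bearer token, to the instance's `/api/v1/statuses` endpoint; the reader then sees only the warning line until they choose to expand the post.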

Unfortunately, this well-meaning feature has also been used abusively against people. Features designed with the best intentions will be weaponized. Likewise, some newcomers resent being relentlessly policed about their use of the CW feature by stalwarts, causing tension in the timelines. Mastodon offers customizable feed filters as part of the way it demands your labors of self-care and curation, making the confrontation of dissent and diversity a baseline of the federated experience. This new work feels like part of the price of decentralizing ourselves by running on free software managed by volunteers or nonprofit workers, as our dependence on a centralized commercial social media platform made safe for advertising disintegrates.

The ableist style typical on Twitter, where accessibility concerns are routinely ignored, gets called out on Mastodon, where, by contrast, people are expected to write image descriptions for ALT text so that screen readers work properly for blind and low-vision users. Similarly, because #HashTagging is an essential practice on the Fediverse, people can be observed reminding each other to use mixed case when typing out multi-word hashtags (e.g., #TechPolicyPress rather than #techpolicypress) so screen readers can pronounce them.

Consider how this supportive community was suddenly invaded by the influx of people fleeing Elon Musk’s capture and destruction of Twitter. Longtime residents have been stressed by anxiety that birdsite people just don’t get it. The work of creating the anti-Twitter could be put at risk by migrators like me who, instinctively or out of grief, reimpose and reinforce bad Twitter habits rather than taking the fresh start in a new home as the cue to establish new, healthier habits.

However, the Fediverse is not automatically a paradise of inclusion and civility, despite its radically different technical structure and business model. On the contrary, the darkest, worst behaviors on the internet are also thriving out here in the hinterlands. Part of becoming your own server administrator is confronting the urgent need to block and defederate your community from the absolute worst-of-the-worst. This adds to the tragedy that vulnerable folks are pushed out into feral social media spaces, rather than feeling safe and included on a centrally administered platform.

A more mainstream network-of-networks within the Fediverse could be emerging, where consensus forms around so-called fediblock lists by which admins circulate names of systemic offenders and cesspit instances. Part of surviving on Mastodon still involves learning how to deploy a fediblock list, either by trusting your admin or by managing your own settings. Either way, one must build out protections for being in the wild. Brainstorms are already circulating from former Twitter engineers proposing an entirely new model of content moderation in which instances could outsource and delegate their reports to a professional third-party service provider. Presumably, members would generously fund their own instance, which in turn funds a living wage and commensurate labor conditions for federated content moderation teams. 

These new ideas are necessary because it’s been saddening to witness, in particular, women of color and other groups endure harassment without adequate social infrastructure and community codes of practice in place. Despite all of the efforts and features described above, the default settings still have a way of preferencing and privileging whiteness. There is a feeling of starting from scratch out here, though. Learning from our mistakes in real time during this experiment, we might build something fundamentally better. 

In these ways, folks are out here in the digital wilderness seeking refuge and a better world. Elon Musk “sub-tooted” the Mastodon community when he referred to “judgy hall monitor” types while exulting in our departures, perhaps signaling his intent as admin to move his instance toward brutality rather than mutuality. 

A series of since-deleted tweets referring to Mastodon.

How Is Mastodon An Anti-Twitter? 

Typically, Twitter migrators go through a process of complaining about Mastodon to praising its virtues in just a few days. Here are a few common aspects of this experience:

Decentralizing Means Picking a Server 

Picking a server (or instance) is hard. Despite what you may hear, it’s consequential. The official Mastodon nonprofit page routes people toward Covenant servers, which offer certain important protections for users. There aren’t enough good ones to choose from at the moment. An unofficial directory helps you find servers by size and content policy. Moving to another server is possible but not that easy, especially for beginners. The big servers are probably too crowded; after the initial influxes, a period of lateral migration off of them will form the new neighborhoods of the Fediverse. Curating one’s own feed becomes constant work of selection and curiosity. Finding the right neighborhood in the Fediverse is not so different from finding one in real life. 

Using the tool, I was able to locate my instance in the network neighborhood.

Non-Algorithmic Onboarding Is Hard

You immediately appreciate how much algorithmic growth-hacking goes into onboarding new users into Twitter, so that a timeline instantly appears to engage the “cold-start” user. On Mastodon, there are no growth hackers or recommendation algorithms, so you are asked to cultivate your own timeline by finding folks to follow. Until you assemble a worthy set of people (and tags) to follow, the experience feels disconnected and disorienting. But once you make some progress tending to your garden, your feed lights up and delivers a satisfying, very-Twitter-like experience, especially because so many of Twitter’s most wonderful people have also joined Mastodon.

Stranger Things Like Multiple Timelines In The Parallel Universe

Unlike the centralized concept of Twitter’s timeline, Mastodon has three distinct reverse-chronological timelines to explore:

  1. Your Home timeline includes accounts and hashtags you follow
  2. Your Local timeline includes posts from others on your instance
  3. Your Federated timeline includes posts from what is getting boosted in the network neighborhood

On small instances, all three timelines should feel manageable and useful, while on the big instances they can get chaotic and overwhelming. If your local and federated feeds aren’t worth scrolling through, that could be a signal to find a trustworthy instance more in line with your own interests. Give it time, though. There is no rush, and it could take a while for your ideal new neighborhood to develop.

No QTs, No Dunks, No Call-and-Response

One habit carried over from Twitter is the quote-tweet maneuver, but Mastodon offers no such obvious affordance. This is said to be intentional, to frustrate the instinct to dunk. Interestingly, a former Twitter engineer has posted to Mastodon suggesting that the quote tweet feature was not measured to have caused an increase in abuse. Furthermore, Eugen Rochko has hinted at being open to introducing the feature in a future version, much to the dismay of certain marginalized communities who specifically appreciate the dunk-free zone. Other voices lament the omission of the quote-tweet affordance as elemental to the playful call-and-response style of Black Twitter. The alternative convention, which could be called a Self-Boosted Quote Reply, hasn’t been observed as a species in the wild. 

Full Self-Driving Verification

Mastodon lets you self-verify from another website you control. It’s a bit nerdy, but this approach maintains the essence of decentralization while solving the problem of verifying a profile’s true identity. A unique feature of the Mastodon profile is the way it offers a 2×4 table of meta-data that many people fill with links to other websites. If you control a website enough to be able to write an HTML link back to your Mastodon account with the rel=me attribute, then that website appears verified with a green check on your Mastodon profile. This allows anyone with their own domain to self-verify, assuming the referenced domain serves as a validating reference (i.e., it’s verifiable who owns it). 
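The rel=me handshake can be sketched in a few lines. This is a simplified illustration using Python’s standard library, not Mastodon’s actual implementation, and the profile URL and page HTML are hypothetical examples:

```python
# Sketch of rel="me" verification: a page counts as verified if it
# contains a link back to the Mastodon profile with rel="me".
from html.parser import HTMLParser


class RelMeParser(HTMLParser):
    """Collects href values of <a> and <link> tags carrying rel="me"."""

    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").lower().split()
        if "me" in rels and attrs.get("href"):
            self.rel_me_links.append(attrs["href"])


def is_verified(page_html: str, profile_url: str) -> bool:
    """True if the page links back to the profile with rel="me"."""
    parser = RelMeParser()
    parser.feed(page_html)
    return profile_url in parser.rel_me_links


# Hypothetical personal homepage linking back to a Mastodon profile:
homepage = '<a rel="me" href="https://example.social/@alice">Mastodon</a>'
print(is_verified(homepage, "https://example.social/@alice"))  # True
```

In the real system, Mastodon fetches the website listed in your profile metadata and looks for exactly this kind of backlink before showing the green check.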

The Revolution Will Not Be Monetized

The Mastodon software does not support advertising, the injection of adtech, or analytics. Many instances in the community outright ban advertising. Even parody servers of brands risk being blocked for spam. Instead, the Mastodon project is largely supported by grants and donations on Patreon or OpenCollective, while individual instances not affiliated with the official project are self-supported by donations and the efforts of volunteers who help develop the software and admin/mod servers. 

Freedom Of Movement On Mastodon 

Moving servers is an essential feature of the Fediverse, but there are some confusing nuances. You have to export your follow list and re-import it to your new home, but your followers will automatically follow you. While you can export your data from an old server, you cannot bring your old posts along to the new one. They stay behind, but if you ever want to return and reactivate your old account, it’s there waiting for you, along with your old posts.

The Direct Message Weirdness

Mastodon’s post settings also take time to figure out, because a post can be scoped four ways: Public, which sends it into the feeds of anyone; Unlisted, which keeps it visible but out of the public timelines; Followers only, which limits reach for locked accounts; and Direct Message, for discreet messages and for breaking off the timeline mentions.

These Discreet Messages, as I wish they were called, behave very differently from DMs on other platforms and can catch you in awkward situations. For instance, if you mention someone, they are joined into the conversation! But this clues us into how they incentivize a less performative, less attention-merchandising social media modality, where arguing in public and then sliding into DMs to talk smack becomes a habit worth breaking. 

DMs on Mastodon also appear in the timeline. This potentially signals their unique purpose. When it’s time to pull mentions off the reply thread, you shift to “Discreet,” where now the branch of the discussion that does not offer value to other people’s timelines is courteously closed. The pattern of being conscious of how you impact other people’s experiences is embedded in the UI.

That said, there is no UI for admins to read your DMs on Mastodon. They are transmitted and stored, unencrypted, in the database. An admin could be obliged to respond to law enforcement as necessary, or may choose to snoop. You’re reminded in the UI that these are not secure comms.

Adding true encryption to Mastodon seems tricky. If you think Mastodon is difficult to use without zero-knowledge end-to-end encryption, just wait until it’s added and folks have to learn about public and private keys. If you’ve used ProtonMail, you may have a better sense of how difficult it can be to deliver this level of security in a browser, let alone across various open source apps. How does one get around the problem that Admins can reset your password? If we consider Signal, the need for us to expose our phone number is the big tradeoff that makes that platform so easy to use. I encourage newcomers to try to get over this concern for now (yeah, a privacy zealot is telling you to get over this hurdle). You can always choose not to use “Discreet Messaging,” and it doesn’t need to be an obstacle to your adoption of Mastodon generally.

Most people seem to hate Mastodon’s DMs, and I am weird for thinking they could be good if we imagined them differently. A former Twitter developer who has drafted an initial charter concept for the Fediverse expressed views on this topic and strong disagreement with Rochko’s idiosyncratic DMs, which don’t meet the usual expectations. It seems there is a strong preference for the conventional understanding of this feature, instead of a new adaptation that prioritizes moderating attention harvesting over pure privacy. Apparently, the Signal organization has been invited to participate in the process of encrypting DMs on Twitter.

What’s It Like To Admin An Instance?

Managed Or DIY

As a “Managed host,” I don’t worry about the infrastructure, I worry about the community. If you’re comfortable on the command line, you could certainly explore going ‘hardcore’ by setting up your own infrastructure on ‘bare metal,’ but then you get to worry about both. 

An admin’s main community focus is content moderation tasks, which include screening new member applications, responding to reports from your members, and taking actions like limiting, deleting, or blocking users or entire instances (servers by domain). Blocking an entire instance is called de-federating, and it’s supposed to be used as a last resort, but I am finding it wholly justifiable to pre-emptively defederate from some truly abhorrent servers. It is important to prevent illegal content from ending up on my server. Since launching, I have fielded fewer than a dozen reports. One of those was a user reporting me, so I have now been on both sides of the looking glass, having myself reported Elon Musk’s account for abuse on Twitter. 

Free As In Freedom But Not As In Beer

Admins are also responsible for monitoring server load and upgrading as needed. I find myself frequently checking my Sidekiq dashboard, which shows real-time activity. When your server can’t keep up with the load, a backlog starts to build. The more people you support, the more processing threads and storage you’re going to gobble up. There are costs involved; luckily, my members have already generously funded current capacity for one year through a PayPal link in my bio. I’m currently serving about 300 active members on a service tier costing $49 a month, which comes to about $2 per person per year. 
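The back-of-the-envelope math checks out, using the rounded figures from this paragraph:

```python
# Checking the hosting math: $49/month for roughly 300 active members.
monthly_cost = 49
active_members = 300

annual_cost = monthly_cost * 12            # $588 per year
per_member = annual_cost / active_members  # ~$1.96, i.e. about $2/year

print(f"${annual_cost}/year, ${per_member:.2f} per member per year")
```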

A screenshot of my MastoHost dashboard. To spawn a thousand more well-moderated Mastodon servers, it needs to be this easy to set up and manage.

Get Real

I’ll need to formalize certain aspects of the instance after I reach a member-base milestone, including appointing a co-admin and moderator team; finding a fiscal sponsor who would want to sponsor us at OpenCollective; setting up my own LLC; or finding partners to establish a co-op or non-profit. My server is in France, so Europe’s General Data Protection Regulation (GDPR) applies; that means I may need an EU-based person on the team. I registered with the US Copyright Office for DMCA purposes, because I may now need to respond to takedown requests. Formalizing the above with governance would help us become a Covenant server and get listed in the main directory. That would mean the instance might be recommended to new registrants by the Mastodon project itself.

Should You Set Up Your Own Server?

There is an acute demand for new, well-moderated spaces in which to onboard new members, as well as to relieve over-crowded “official” instances. The testimonials of Black users and other people of color who are having horrible experiences on the big instances are shameful and unsurprising, given that content moderation does not scale well anywhere without significant resources. But in my experience, smaller servers should have the tools and wherewithal to build safer communities because they operate at a smaller scale and yet interface with the larger network neighborhood. This involves people willing to serve on the frontlines of a decentralized information war to protect their own communities. Mistakes will be prevalent. Resolute patience is compulsory.

Hopefully, someone will clone the UX for managed services so that infrastructure responsibility is outsourced and simplified into a web interface. If hosted in the US, the privacy protections will be weaker, for now, but the legal status might be somewhat clearer. When that happens, I hope a thousand new servers bloom. It’s also entirely possible that the challenges of running your own vanity social network pile up after a wistful honeymoon phase. Only time will tell if it’s a fool’s errand, best left to professionals and a sustainable business model, or an important moment of punctuated equilibrium.

Hermes BBS running on a period Macintosh. via Reddit r/retrobattlestations

Closing Thoughts

As a genXer myself, I have vivid memories of various Revenge-of-the-Nerds moments in the history of technology, where a peer-to-peer innovation shakes up the status quo. The first such occasion came in high school when I ran a BBS (bulletin board system) over dial-up modems (2400 baud, baby) on local telephone lines using free shareware software called Hermes. Weirdly enough, back then, people dialed into each other’s computers and posted on message boards. There were GIFs. Then in college, Napster (and later LimeWire, BitTorrent, etc.) upended the music industry and triggered the pivot to digital subscriptions. It showed that people would tolerate an unintuitive, nerdy UX in order to get what they wanted, how they wanted it. In many ways, setting up a Mastodon server is a rekindling of this spirit of reworking our digital life around a peer-to-peer exchange instead of a centralized service. That moment when my Mastodon feed sparked to life, as friendly familiar faces appeared, echoes the uncanny moment when I typed a favorite album into Napster. To skeptics of Mastodon, I would suggest that this weird software may not itself be the future of social media, but it portends disruption. It points to an epic power struggle over who controls the means of social media production and distribution, a struggle that may well be with us long after Elon and his ilk have decamped for Mars, or beyond.


Learn the syntax for sharing your Mastodon address as a clickable link:

Format: @handle@instance.tld


URL Format: https://instance.tld/@handle 
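The two address formats convert mechanically back and forth. As a sketch (the handle and domain here are made up for illustration):

```python
# Translating between the @handle@instance.tld and URL address formats.

def handle_to_url(handle: str) -> str:
    """@handle@instance.tld -> https://instance.tld/@handle"""
    user, instance = handle.lstrip("@").split("@")
    return f"https://{instance}/@{user}"


def url_to_handle(url: str) -> str:
    """https://instance.tld/@handle -> @handle@instance.tld"""
    instance, user = url.removeprefix("https://").split("/@")
    return f"@{user}@{instance}"


print(handle_to_url("@alice@example.social"))        # https://example.social/@alice
print(url_to_handle("https://example.social/@alice"))  # @alice@example.social
```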


Mastodon Instance URL shortcuts

There are buttons and menus for these pages, but if you want to skip directly to or bookmark a feature, you can prepend an instance’s domain name to these shortcut paths:

/explore See any* server’s trending posts, topics, articles

/directory See any server’s member directory

/public See any server’s federated timeline

/public/local See any server’s local timeline

/about See any server’s information and rules

/privacy-policy See any server’s privacy policy

/settings/profile Shortcut to edit your profile page

/settings/import Shortcut to import CSV lists into your account

/settings/export Shortcut to export data from your account

/settings/aliases Shortcut to prepare moving to new instance

/settings/migration Shortcut to leave an old instance

(*Note: not all instances open these links to the public, applies to Mastodon 4 and up)

Find Your Twitter Friends on Mastodon

— Easiest to use because you log in to both your Twitter and Mastodon accounts

— Scans linked Twitter for a CSV file to upload into Mastodon

— Full-featured CSV generator for Twitter

Mastodon Mobile Apps

Bookmarking your instance to an icon on your device’s home screen works surprisingly well. 

iOS: The official Mastodon app limits you to your Home timeline only, which is sort of like training wheels for Mastodon.

MetaText: more fully featured, free and open source

Toot!: delightful paid app that lets you lurk and engage across instances without being a member

Mammoth: currently in TestFlight beta and seems very promising

Android: Folks seem to recommend Tusky

The post How To Open An Outpost In Social Media Exile appeared first on Tech Policy Press.