
Does the Digital Services Act Have Anything to Say About the ‘TikTokification of Instagram’?

Rachel Griffin / Aug 2, 2022

Rachel Griffin is a PhD candidate at the Law School of Sciences Po Paris.

As Adam Mosseri said, ‘There’s a lot going on on Instagram right now.’ Last Tuesday, the head of Instagram responded with a personal video statement to an intensifying backlash against changes to its app, which increasingly shows users videos and recommended content from unknown accounts instead of photos and content from accounts they follow. Mosseri’s statement followed an endorsement by Kylie Jenner – one of Instagram’s most-followed accounts, and notorious for knocking $1.3 billion off Snap’s market value by tweeting that she didn’t use Snapchat any more – of a campaign by smaller creators to ‘Make Instagram Instagram Again’.

This isn’t just a trivial issue for celebrity influencers: Instagram’s design choices affect people’s livelihoods, and they shape the online culture that we all live in. Given the implications for the working conditions of professional content creators and business users, and for media pluralism and diversity, changes like these should be understood as a public policy issue.

Nonetheless, the EU’s Digital Services Act – billed as a new ‘comprehensive’ regulation of social media and other online platforms – barely addresses how design choices like Instagram’s shape the production and dissemination of social media content. The main exceptions, Articles 26-27, mandate assessments and mitigation measures for ‘systemic risks’ – but it remains uncertain how these provisions will be applied in practice. Ultimately, Instagram’s ability to disregard users’ and creators’ interests in pursuit of profit reveals the limits of the DSA’s marketized approach to platform governance.

What’s going on on Instagram?

Instagram’s recent changes include publishing all videos as remixable ‘Reels’, showing users more recommended content from accounts they don’t follow, heavily promoting videos over photos, and testing a full-screen feed for some users. These changes are widely regarded as copying Instagram’s wildly popular major competitor, TikTok, which is built around short-form video and a ‘For You’ page of algorithmically-recommended content from unknown accounts.

Meanwhile, professional Instagram creators who built audiences with photography, and users who prefer seeing photos from friends and accounts they follow, have reacted angrily. A Change.org petition started by influencer Tati Bruening rapidly gathered over 200,000 signatures. Mosseri’s video statement came the day after Jenner and her sister, Kim Kardashian, shared Bruening’s ‘Make Instagram Instagram Again’ message – though he later claimed he recorded it before seeing their posts.

On Thursday, Mosseri announced via the newsletter Platformer that the full-screen test would be abandoned and that users would temporarily be shown less recommended content while Instagram worked to improve its recommendations, but he reiterated that the much-criticized shift towards videos and recommendations would continue. In his framing, Instagram is simply following its users: people are increasingly sharing videos as hardware and internet connections improve, and the app’s main feed has to be stocked with recommended content because people increasingly use other features, like direct messages, to share content with friends.

Social media experts immediately pointed out the gaps in Mosseri’s arguments, including the fact that Instagram’s design choices influence what users choose to post. Creator studies scholars like Sophie Bishop and Kelley Cotter have shown how content creators strategically tailor their content to gain algorithmic visibility. In recent interviews, frustrated creators have described being forced into making labour-intensive videos because that’s all Instagram will recommend, even when they know their followers prefer photos. And the claim that recommendations are necessary because content from friends runs out only makes sense if you assume that an infinitely scrolling feed is needed to keep people spending time on Instagram and generating revenue.

Why does it matter?

EU politicians have made platform regulation a major legislative priority – but to my knowledge, none of them has commented yet on Instagram’s design changes. EU law tends to focus on copyright protection, security issues, and harmful content like hate speech. But the intense online debates around the ‘TikTokification of Instagram’ raise important questions about what social media are for and who benefits from them. These should also concern policymakers.

Millionaire celebrities like Jenner may find it easiest to get Instagram’s attention, but – as content moderation scholar and pole dancing influencer Carolina Are recently emphasized – many of the loudest critics are creators who depend on their Instagram audience for income. As Brooke Erin Duffy shows, social media content creation is often an intensely precarious form of labour: in addition to the usual struggles of creative work, creators must constantly adapt to opaque and unpredictable algorithmic changes which determine their success. Their working conditions are just as important as those of other platform workers, like Uber drivers, whose labour rights have gained increasing attention from European policymakers and the public. It also needs to be emphasized that these changes don’t only impact professional influencers, but also any creative worker or small business reliant on social media to attract customers – which is now practically mandatory for everyone from restaurants to tattoo artists.

Instagram is now pressuring all these professional users to post videos which are far more costly and labour-intensive to create. It also forces creators to rethink their content. Creating compelling videos may require them to give viewers more access to their behind-the-scenes processes or personal lives – and for some creators who found success with photography or images which don’t translate well to video, it may not be possible. These changes will inevitably exacerbate existing inequalities of visibility, as better-resourced creators who can invest in video equipment, skills and/or professional help will find it much easier to gain an audience.

Finally, it’s not only professional creators who matter. Our ability to engage with, create and share media is valuable in itself, not only as a source of income. As media theorists like Nick Couldry and Mary Gray argue, being able to actively engage with media and participate in collective cultural production matters for individual wellbeing and self-development, and for social justice. If amateur photographers and meme accounts whose content brings themselves and others joy can no longer find an audience online, this is also a legitimate concern for media policy.

What about the DSA?

If we accept that platform design decisions like these are an important issue in social media policy, what does EU platform regulation have to say about them? The Digital Services Act, which was agreed by legislators this summer and should come into force by 2024, has been billed as a ‘comprehensive rulebook for the online platforms that we all depend on’. But looking at its provisions, it seems like the answer is ‘not much’.

In my research, I’ve argued that EU social media regulation is generally overly focused on the ‘content level’ – identifying individual posts, comments or other discrete pieces of content which are harmful, and making sure they get removed – at the expense of more structural aspects of platform governance. For the most part, the DSA is no exception. The law creates a tiered set of due diligence obligations: some apply to all online intermediary services, some to online platforms for user-generated content, and some to ‘very large online platforms’ with over 45 million EU users. Those in the first two categories focus almost exclusively on the content level.

Platforms will have to state clearly in their terms and conditions what content they allow (Article 12(1)), consider fundamental rights when implementing these policies (Article 12(2)), publish yearly transparency reports (Article 13), and provide an appeals procedure for users (Article 17). However, all these provisions apply only to content moderation – defined in Article 2(p) as actions platforms take to identify and address content which is illegal or violates their terms and conditions. Users can’t rely on any of these provisions to understand or challenge platforms’ underlying decisions about what kinds of content their interfaces allow and what will find an audience.

The only exception in these sections is Article 24a, which was added late in the legislative process and requires online platforms using recommendation systems to clearly state in their terms and conditions the main criteria used in these systems, the reasons for the relative importance of those criteria, and any options users have to change them. With this provision in force, Instagram will have to give users basic information about how and why it is promoting video content and unknown accounts.

But of course, Instagram has effectively already done this – and left critics extremely unsatisfied. It is also unlikely that more transparency about recommendations will translate into more influence for creators or greater accountability towards users. On Wednesday, after announcing the first-ever revenue drop for Instagram’s owner, Meta, CEO Mark Zuckerberg reiterated that promoting videos and recommending unknown content will be its strategy to increase engagement and growth. Given its market dominance and the lack of alternative platforms where creators can reach comparable audiences, Instagram is likely perfectly capable of riding out this wave of criticism without making any meaningful changes.

This brings us to the DSA’s provisions on ‘very large online platforms’, which set out various obligations for dominant platforms like Instagram that go beyond the content level, aiming to address more systemic issues. Two aspects stand out as particularly relevant: Articles 26-27, which require platforms to identify and mitigate ‘systemic risks’ arising from features including platform design and recommendations, and Article 29, which requires them to provide at least one recommendation system option which isn’t based on personalized profiling.

One of the Make Instagram Instagram Again campaign’s key demands is to bring back the app’s original chronological feed – where users just see content from people they follow in chronological order, with no personalized recommendations or prioritisation. This would also be an obvious option that a platform could introduce to fulfil its obligations under Article 29.

But in fact, Instagram already did this in June, before the current surge of public criticism. The obvious reason critics aren’t satisfied is that – as discussed above – recommendation systems and other design features don’t just influence which content users see, but what is created in the first place. Unless a critical mass of users manually switch to chronological feeds, creators will still have to make videos and tailor them to suit recommendation algorithms if they want to find an audience. Article 29’s focus on individual user choice doesn’t address the systemic effects of recommendation systems on media content.

On the other hand, Articles 26-27 raise interesting questions. It’s clear that design changes like those at issue here are within the scope of the provisions, which explicitly mention recommendations and interface design as features that platforms must consider and might have to change – if they could foreseeably harm important public values such as public health and fundamental rights.

Interestingly, the text agreed by legislators this summer adds a provision that relevant fundamental rights include ‘freedom of expression and information, including the freedom and pluralism of the media’. Given the implications of Instagram’s changes for the visibility of smaller creators who lack the resources to create high-quality videos, they seem obviously relevant to media pluralism. It’s also arguable that they will affect other systemic risks specified in Article 26, like the spread of illegal content and disinformation (video content is more difficult to moderate, and TikTok-style presentation of short-form videos with little context appears to exacerbate some issues around disinformation). Accordingly, it seems that Instagram should at least have to formally assess these risks, and potentially adapt its platform design to mitigate them.

What will this mean in practice?

Ultimately, everything will depend on how the provisions are interpreted and enforced. Platforms must submit their risk assessments to the DSA’s enforcement authorities on request (Article 26(3)), but in the first instance these provisions rely on privatized enforcement, through companies’ internal compliance procedures and yearly independent audits (required by Article 28). It would hardly be difficult for Instagram to produce a risk assessment which namechecks relevant systemic risks, then explains why they aren’t that significant and why the changes should go ahead anyway. The real question is whether regulators would accept that, or would use their proactive enforcement powers to demand serious engagement with issues like media pluralism and substantive policy changes.

Ultimately, the DSA aligns with a marketized approach to platform governance, which mostly ignores questions around how platforms like Instagram shape the production and dissemination of media content at scale. Under its content moderation-focused provisions, dominant companies are free to design and optimize their services for profit, as long as they offer users fair and non-discriminatory access to those services in accordance with their contractual terms. Articles 26-27 have the potential to permit a more expansive approach, where the largest platforms are required to prioritize public interests over profit. It remains to be seen whether regulators will take this opportunity.

This article references the final text of the DSA as recently agreed by the EU Parliament.

Authors

Rachel Griffin
Rachel Griffin is a PhD candidate and lecturer at the Law School of Sciences Po Paris. Her research focuses on European social media regulation and its implications for structural social inequalities.
