
Platform Regulation in APAC and the EU: A Comparative Overview

Agne Kaarlep, Ilana Rosenzweig, Noah Douglas / Sep 27, 2023

Agne Kaarlep is head of policy and advisory at Tremau, a technology trust & safety start-up located in Paris. Ilana Rosenzweig is a trust & safety professional who has led teams across APAC and EMEA, including at Twitter. Noah Douglas is an early-career researcher and policy professional.

Online platform regulation is advancing across the globe, with the European Union (EU) and the Asia Pacific (APAC) region at the forefront. The EU has enshrined its digital strategy as a flagship policy, while the past years have seen new and far-reaching regulations in various countries in the APAC region. As the EU cements its position as a leading regulatory force and APAC emerges as the fastest-growing digital market, the approach towards online platform regulation in these regions is of critical importance.

This comparative overview delves into the regulatory convergence and divergence between the EU and APAC, exploring the implications for services operating across these regions. The question thus arises as to whether services active in both regions face two widely disparate regulatory silos. We offer a brief overview of EU and APAC approaches to platform regulation, derived from an extensive analysis of selected regulations from the EU and APAC countries.

The Expansion of Content-Based Regulations: From Content-Specific to General-Approach Laws

Both regions first approached platform regulation by targeting specific kinds of content and are now regulating platform behavior more generally. In the EU, platforms have had to comply with content-specific laws addressing such disparate phenomena as copyrighted content and terrorist content. The 2019 Copyright Directive, the Terrorist Content Online Regulation (TCO), and the proposed Regulation to combat child sexual abuse material (CSAM) tackle specific content types. In addition to sector-specific laws, online platforms active in the EU must now comply with the Digital Services Act (DSA). In force since November 2022, and requiring compliance from Very Large Online Platforms (VLOPs) since August 2023, the DSA differs from prior laws by regulating how online platforms and other intermediaries provide their services generally.

Similarly, in APAC there are abundant sector- and content-specific laws spanning audio-visual content (e.g., Australia's Abhorrent Violent Material Act (AVM) and South Korea's amendments to the Telecommunications Business Act and Network Act (TBA) regarding illegally filmed content), CSAM (e.g., the Philippines' Republic Act No. 11930), and foreign interference through disinformation and misinformation online (e.g., Singapore's Foreign Interference (Countermeasures) Act, or FICA). Recently, we have also seen general-scope laws in APAC, like Australia's adoption of the Basic Online Safety Expectations (BOSE) and the Codes of Practice for the Online Industry as authorized by the amendments to Australia's Online Safety Act (OSA), and Singapore's Online Safety Act of 2022 (OSB).

This shift to more general-scope regulations signifies a comprehensive approach towards online platform regulation, reflecting a growing recognition of the dynamic nature of challenges present in the digital landscape. For online platforms, it underscores the need to adopt holistic measures that tackle not just specific instances of harmful content but also broader systemic issues within their services.

A Dynamic Landscape of Platform Obligations: From Content Removal to Risk-Based Strategies, to Good Old Naming and Shaming

Online platform regulation is far from a static domain; it's an arena of continual evolution and transformation. We have witnessed a progressive shift in the way obligations are imposed, where the "first-generation" regulations that centered on content removals are increasingly layered with general-scope and risk-based obligations. The "first generation" of online platform regulation is built around removal obligations, whereby companies must remove specific instances of content, usually following a court or administrative order by a law-enforcement agency (LEA) or regulator. Content-specific laws in the EU and APAC rely strongly on this approach. Such is the case for the EU's TCO and proposed CSAM Regulation, South Korea's TBA, Singapore's FICA and Protection from Online Falsehoods and Manipulation Act (POFMA), and the amendments to Australia's Online Safety Act of 2021.

But as the digital environment grows in complexity and significance, so does the regulatory landscape. Risk-based obligations represent the "second generation" of regulations, marking a new era that recognizes the varying degrees of risks inherent in online platforms. These emerging, general-scope laws in both regions complement removal duties with a risk-based approach to platform regulation and/or general obligations to take steps to limit certain content. Online platforms may therefore face additional obligations to assess risks present in their services and minimize the harm they cause to their users.

The EU's DSA leads the way in the risk-based approach, mandating VLOPs to assess and mitigate systemic risks associated with illegal content, recommender systems, fundamental rights, and more. In APAC, the risk-based approach is also present, albeit in a slightly different set-up. In Australia we see a similarly layered approach, where risk-based general obligations are layered on top of specific content removal requirements and take two forms. The Australian eSafety Commissioner was authorized by the OSA to publish the Basic Online Safety Expectations and to register Industry Codes to complement the notice-and-removal obligations of the Online Safety Act. These expectations encourage companies to conduct safety risk assessments throughout the product life cycle. Singapore's Online Safety Act also allows binding codes of practice to impose risk assessment obligations (although the code of practice has not taken up this opportunity).

Both regions also recognize that not all platforms are made equal and adopt a tiered approach to obligations. In the EU, the DSA sets 45 million users as the threshold at which a significant risk management framework kicks in, de facto recognizing that a platform's presence in the lives of over 10% of the EU population comes with heightened risk. In contrast, the OSA in Australia does not pre-mandate a specific threshold but rather creates a system where, through a variety of measures, the regulator and industry benchmark the risk in a delicate dance.

Under the OSA, the eSafety Commissioner can mandate the creation of industry codes containing "reasonable compliance measures" to achieve the online safety objectives and outcomes of the OSA and BOSE. In June 2023, the eSafety Commissioner registered Industry Codes, broadly covering "child sexual exploitation and pro-terror material, as well as material that deals with crime and violence, and drug-related content," which carry civil penalties for non-compliance (Consolidated Industry Codes of Practice for the Online Industry (Class 1A and Class 1B Material) and Schedule 1 – Social Media Services Online Safety Code). These Industry Codes expressly adopt a risk-based approach based on risk profiles, and start with a requirement that providers perform a risk assessment to identify their Tier of risk. They then go one step further and impose specific "Minimum compliance measures" based on that tiering.

These types of laws also leverage transparency and public accountability as strategic vectors for compliance. While the Basic Online Safety Expectations are legally binding, there are no penalties attached to the obligations themselves (although the industry codes mentioned above can carry fines). The Commissioner's powers are to request information on how companies meet these expectations, to 'name and shame' platforms that fall short of them, and to fine platforms for failing to provide the requested information.

While the EU's DSA takes a more traditional approach to potential penalties, public opinion is also used as a source of compliance motivation, notably through extensive transparency reporting requirements, mandated third-party audits, and provisions to ensure researchers can access data. These strategies clearly show a new approach to content regulation, leveraging public opinion and reputation as powerful tools in enforcing regulatory compliance.

What About the "Upload Filters"? Proactive vs. Reactive Obligations and Concerns Around Overreach

Platform regulation in APAC and the EU has evolved from purely reactive removal duties to instances of proactive obligations. APAC has a strong history of reactive regulation mandating prompt removals or other actions, such as correction notices, following an LEA's or regulator's order – see Singapore's POFMA and FICA, Australia's OSA, and South Korea's TBA. Similarly, in the EU, the TCO and the CSAM proposal require platforms to take action following legally binding decisions by national authorities.

Recently, regulators have sometimes included proactive duties requiring companies to remove certain types of illegal content and/or prevent their reappearance. Beyond CSAM, we could see more general proactive duties in Singapore, since its OSB requires platforms to proactively prevent access to content that poses a "material risk of harm". However, this duty is defined through a Code of Practice that mandates proactive detection and removal only of CSAM and terrorism content, while requiring that platforms minimize users' exposure to all other harmful content through community guidelines and standards, content moderation, safety tools, and user reporting. Similarly, the Australian BOSE requires platforms to generally take "reasonable steps to ensure safe use"; however, the initial Industry Codes limited themselves to Class 1A and Class 1B material ("child sexual exploitation and pro-terror material, as well as material that deals with crime and violence, and drug-related content").

A further set of obligations that has arisen is that of "reactive proactivity," wherein a proactive obligation to prevent future uploads arises as a reaction to the identification of illegal content, in order to prevent its reappearance. These sorts of duties are perhaps most common in the CSAM space. For example, the EU's CSAM proposal, still under debate, suggests the creation of a database of hashed images and would require service providers to remove matching content. South Korea's TBA takes this one step further and imposes these obligations on non-consensually filmed sexualized images involving adults.

Clearly, broad proactive duties are controversial. The EU in particular is unlikely to use them, as the European Court of Justice has ruled that general monitoring obligations (or duties to scan all content for an indefinite amount of time) are unlawful, a principle now also enshrined in the Digital Services Act. Indeed, historical precedent in the EU suggests that any such expansions, beyond addressing very narrowly defined content with significant safeguards, could face considerable legal challenges. The imposition of additional proactive responsibilities could open a Pandora's box of extensive legal confrontations, as many companies may perceive this as an overreach of government regulation into their operations.

Shedding Light on Digital Platforms: Enhanced Transparency and Accountability Measures

As digital platforms become increasingly influential, transparency and accountability are gaining importance in online regulation. In the EU, for instance, the TCO and the DSA have instituted comprehensive reporting and public disclosure requirements. The DSA requires online platforms to publish reports yearly, and VLOPs to publish them every six months, covering a wide array of topics, including details of orders received from member states, content moderation practices, and user reporting statistics. In addition, VLOPs are obliged to publicly report on the results of their risk assessments and are expected to undergo mandatory audits, at their own expense, to ensure compliance. To further this transparency, the EU Commission may request access to and explanations of the platforms' algorithms and recommender systems.

In the APAC region, transparency is being enhanced through similar, albeit less structured, approaches. In Australia, the eSafety Commissioner has the power to require periodic and annual reports, with non-compliance attracting penalties. The South Korean landscape is evolving in a similar direction. The Singaporean Online Safety Bill stands out in the APAC region, demanding rigorous transparency through annual reports to regulators. These reports must detail the measures taken to combat harmful content and the responses to user reports. While the Act, like the DSA, allows for mandatory audits to verify compliance, this has not been implemented in the current draft of the Code of Practice. Similarly, while cooperation with expert researchers is permitted, it has not yet been included in the draft.

These developments, unfolding in parallel across both EU and APAC regions, mark a significant stride towards ensuring transparency. This solidifies the accountability of digital platforms and allows users and authorities a clearer insight into the actions and processes driving these online spaces. The alignment of regulatory requirements in these diverse regions signals a growing, global trend towards greater transparency and accountability in the digital world. This universal movement sets a clear expectation for all online platforms, regardless of their geographical presence, to operate in an open and responsible manner, contributing to safer, more trustworthy digital environments.

Amplifying User Agency: Understanding Rights and Reporting Mechanisms Across Regions

It is also clear that users' role in content moderation is becoming more significant. The EU's DSA mandates easily accessible reporting mechanisms for users and requires that they be kept informed, in a timely manner, about actions taken on their reports. Compared to those in Europe, APAC laws have traditionally focused more on the obligations of platforms following a government directive, with less emphasis on user reports. However, this is gradually evolving. South Korea's TBA and Australia's OSA are leading examples of the shift towards user empowerment in the APAC region. The TBA mirrors the DSA's approach in many respects, leveraging a third-party reporting system involving users, regulators, and trusted third parties. On the other hand, Australia's OSA requires platforms to provide mechanisms for users to report offensive or illegal content. Giving further weight to notices from Australian users, if the platform doesn't respond to a user's report, the user has the right to appeal to the eSafety Commissioner, who can review the case and issue an order for content removal.

Through these evolving regulatory frameworks, users are given the means to exert their rights and play a proactive role in content moderation. This signifies a shift towards a more inclusive and participatory approach to online platform regulation, where users aren't merely passive consumers but active contributors to the safety and integrity of the digital space.

Conclusion

The EU and APAC regions are both exhibiting significant dynamism in their regulatory approaches to online platforms, despite the existence of distinct regional nuances. While the EU is often perceived as the most active regulator of online platforms, it is not alone. In particular, the APAC region has recently seen a boom in legislative initiatives. Regional differences between them certainly exist. For example, EU law is further along in user empowerment and risk-based duties, while APAC laws give a greater role to government orders and proactive removal duties. Yet, notwithstanding their differences, both regions are following a similar progression: content-specific laws are increasingly layered with general-approach laws, and regulation is moving beyond content removals towards risk-based obligations, increased transparency of companies' moderation decisions, and a greater role for user reports.

A recent testament to this fluid landscape is the passing of Singapore's Online Criminal Harms Act, which was publicly announced and enacted during the time this article was being drafted and is therefore not part of the analysis above. This underscores the extremely fast pace of legislative development in this space and the complex, ever-changing web of legal realities in the online world. As a result, constant vigilance and adaptability will be necessary for companies that seek to navigate this rapidly shifting regulatory terrain.

This essay has been updated to more accurately describe details of Singapore's regulation.

