Can The Digital Services and Digital Markets Acts Complement “Algorithmic Policies” In Safety, Accountability, and Protection For Designated Groups?

Yonah Welker / Jan 22, 2024

Following the Bletchley Declaration, many governments are pursuing a risk-based approach to algorithmic and AI safety. Despite the general agreement, countries are still at different stages of deploying this vision. For example, the US executive order on AI requires safety assessments, civil rights guidance, and research on the labor market impact of the technologies. It also established the US Artificial Intelligence Safety Institute. In parallel, the UK government created its own AI Safety Institute and recently enacted the Online Safety Act, echoing the approach of the European Union, which is in the final stages of negotiating the AI Act and is already enforcing the Digital Services Act.

Other efforts from multilateral agencies and institutions, such as UNESCO, WHO, and the OECD, have focused on area-specific guidelines addressing algorithms in education, healthcare, the labor market, and literacy, as well as capacity-oriented recommendations. These include UNESCO’s AI competence frameworks for students and teachers and its recommendation to set 13 as the minimum age for using generative AI. Moreover, UNESCO’s recent action plan to address disinformation and social media harms (including those involving generative AI) collected responses from 134 countries, including countries in Africa and Latin America. Similarly, governments from 193 countries committed to effectively implement children’s rights in the digital environment through a resolution adopted by the United Nations General Assembly’s Third Committee. The OECD also issued a report and technology repository reflecting how “AI supports people with disability in the labor market.”

This multifaceted approach to algorithmic governance brings attention to high-risk areas such as health, education, labor, policing, and justice and legal systems, as well as impacts on minors and designated and vulnerable groups. A critical missing piece of these approaches is addressing the impact of algorithmic systems and other related technologies on persons with disabilities. Facial recognition algorithms may not properly identify individuals with missing limbs, facial differences or asymmetry, speech impairments, or atypical communication styles or gesticulation, or those who use assistive devices. In another example, these systems may use ear shape or the presence of an ear canal to determine whether or not an image includes a human face. Yet this approach may fail for groups with craniofacial syndromes or those who lack these features.
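
To illustrate the failure mode described above, consider a deliberately simplified, hypothetical sketch (not drawn from any real product; all names are invented for illustration) of a face-validity check that treats statistically common landmarks, such as ears, as mandatory. Anyone whose anatomy or assistive device hides those landmarks is silently rejected:

```python
# Hypothetical, simplified illustration of a biased face-validity check.
# Real systems use learned models rather than hand-written rules, but the
# failure mode is analogous: treating statistically common features as
# mandatory excludes the people who lack them.

REQUIRED_LANDMARKS = {"left_eye", "right_eye", "nose", "left_ear", "right_ear"}

def is_valid_face(detected_landmarks: set[str]) -> bool:
    """Return True only if every 'required' landmark was detected."""
    return REQUIRED_LANDMARKS.issubset(detected_landmarks)

# A person with a craniofacial syndrome, or whose ears are occluded by an
# assistive device, may yield no ear landmarks and be wrongly rejected.
print(is_valid_face({"left_eye", "right_eye", "nose"}))  # False: excluded

def face_confidence(detected_landmarks: set[str]) -> float:
    """Score available evidence instead; no single landmark is mandatory."""
    return len(detected_landmarks & REQUIRED_LANDMARKS) / len(REQUIRED_LANDMARKS)

print(face_confidence({"left_eye", "right_eye", "nose"}))  # 0.6: still plausible
```

The second function sketches one mitigation direction: scoring the evidence that is present rather than hard-requiring any single feature.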

Since the initial proposal of the EU AI Act in 2021, the European Commission has received appeals and comments addressing algorithms and disability rights, the use of biometric, facial, and emotion recognition systems, and cases affecting refugees and immigrants, including automated risk assessment and profiling systems. However, research and development of disability-centered AI systems is still a complex task from a technology and policy perspective. It requires an intersectional approach that accounts for a range of conditions, age, gender, and spectrum-specific parameters, as well as an understanding of multiple legal frameworks.

It also underscores the need for non-AI-specific frameworks such as the Accessibility Act (whose requirements take effect in 2025), the EU Digital Services Act and Digital Markets Act, the Convention on the Rights of Persons with Disabilities, equality and child protection laws, and the involvement of specialized institutions and frameworks, thus going beyond just forming generalized “Algorithmic Safety Institutes.” In particular, the Digital Services and Digital Markets Acts cover “gatekeepers,” the largest technology companies and platforms. These acts contain specific articles to address fair competition and minimize silos, and to improve accountability and reporting systems. For user protection, they include measures for algorithmic transparency, outcomes, and user consent, along with specific protections for minors and designated groups. They also seek to discourage dark patterns and manipulation.

Can these frameworks, along with the Accessibility Act taking effect in 2025, complement AI and data regulation to better protect groups with disabilities and bring more transparency and accountability while minimizing technological and economic silos?

AI systems, designated groups, and regulation

AI systems regulation addressing designated groups or persons with disabilities is not limited to one legal document but shaped by a spectrum of legal frameworks, laws, conventions, and policies. In particular, such cases can be regulated and affected by AI-specific acts, related data, consumer, and human rights frameworks, memorandums, and conventions. For example, the US is known for its “Americans with Disabilities Act,” the UK has the “Equality Act,” France has Law No. 2005-102 “for equal rights and opportunities, the participation and citizenship of persons with disabilities,” and Germany the “General Equal Treatment Act.” There are similar examples in other countries.

The Digital Services Act (DSA) and Digital Markets Act (DMA), along with related regulations in other countries, such as the UK’s Online Safety Act, can also have an impact, as they aim to “create a safer digital space where the fundamental rights of users are protected and to establish a level playing field” for organizations. For instance, assistive technology used to support dyslexia or autism can be affected by articles in AI and data regulation, specific laws protecting children and designated groups such as the Convention on the Rights of Persons with Disabilities, and country-specific accessibility, equality, and non-discrimination laws.

In particular, the DSA creates rules for online platforms around transparency, accountability, explainability of algorithms, use of “dark patterns,” protection of minors, targeting and profiling, privacy and consent, and manipulation. It also establishes a feedback loop between platforms and stakeholders, and designates “digital services coordinators” for each member state. These mechanisms will benefit all Europeans, but they also offer the potential to protect persons with disabilities and other designated groups.

How the DSA, DMA, and other frameworks may bring more protection to persons with disabilities

Disabilities present combinations of spectrums, conditions, stakeholders, and technologies, making them difficult to address through general AI regulation alone. However, they can be addressed by consumer, digital, and data-protection acts targeting specific violations resulting from algorithms. In the case of the Digital Services Act and Digital Markets Act, the new laws could improve mitigation in specific areas:

  • Protection of minors. Many users, including parents and caregivers, use assistive technologies and digital platforms on behalf of individuals with disabilities. The DSA directly introduces articles related to minors’ protection (Article 28) and a set of stakeholders that can be involved in the digital ecosystem, such as trusted flaggers and moderators (Article 22), as well as parental controls and flagging mechanisms.
  • Invisible risks, dark patterns, and manipulation. Online platforms are known to mislead or manipulate users, including vulnerable groups, into making certain choices. These companies can use “dark patterns” - user interfaces carefully crafted to trick users into doing things, such as buying overpriced health insurance or medical services targeted at particular groups or health conditions. Recital 67 of the Digital Services Act introduces and explains the ban on “deceptive and exploitative design patterns.” It also provides an open list of examples of such practices, overlapping with existing data regulation and consumer protection rules at both the Member State and EU levels.
  • Privacy breaches and consent. In some countries, governmental agencies have been accused of using data from social media without consent to confirm patients’ disability status for pension programs. Both the Digital Services and Digital Markets Acts explicitly complement, and do not override, existing Member State data regulations and GDPR requirements associated with personal data protection and consent. Additionally, the DMA requires gatekeepers to obtain valid user consent before any personal data is collected or used via their platforms. However, more work is still required to protect particular groups, especially those with physical or mental impairments.
  • Profiling and targeting. Medical and social services can use “profiling,” which may lead to discriminatory outcomes. Similar to the GDPR's articles addressing profiling, the DSA introduces additional requirements to restrict targeting and profiling. In particular, it fully prohibits targeting based on the profiling of children. Targeting is also prohibited when profiling uses special categories of personal data, such as ethnicity, religion, or political views. For other groups, it requires companies to provide a detailed explanation of profiling and the ability to opt out of recommendations based on profiling.
  • Accuracy, fairness, and transparency. Algorithmic distortions and errors related to disabilities are largely associated with historical and statistical underrepresentation. For instance, companies develop facial recognition systems without acknowledging specific facial impairments. Algorithms used in legal and judicial systems are trained on publicly available data sets that overlook particular groups and populations. Similarly, recommendation and personalization algorithms are known to be less effective when deployed for designated and vulnerable groups. Article 27 of the Digital Services Act requires online platforms (and especially VLOPs - very large online platforms) to explicitly explain the mechanisms behind recommendations, content or product rankings, and other algorithms, as well as to offer users the ability to modify or influence the parameters behind them, which may help improve accuracy for specific groups and individuals.
  • Public oversight, scrutiny, and accountability. The DSA introduces not only the tiers of platforms, designated digital services coordinators, and a system of oversight and communication with national authorities and courts, but also obligations related to public transparency and accountability. In particular, Article 40 of the DSA obliges very large online platforms to provide competent authorities and “vetted researchers” with access to the data necessary to monitor their compliance with the regulation. Very Large Online Platforms and Search Engines (tier 4) have already started to publish transparency reports under the DSA, and at least five platforms participated in a “stress test” related to harmful content and actions, including impacts on minors and vulnerable groups.
  • Accessibility. The DMA introduces requirements allowing users to freely install, modify, or uninstall third-party apps, which may help to add third-party accessibility tools targeting visual and hearing impairments and dyslexia, including EU-built apps and Commission-funded projects. It will be complemented by the European Accessibility Act, whose requirements take effect in 2025 and which will help to regulate the market for assistive and accessibility solutions.
  • Risk-based tier categorization and compliance. Similar to the AI Act, the DSA uses four tiers of risk-based categorization, driven by the scope of companies and platforms, their influence on users, and their social, economic, and business leverage. A higher tier means more requirements for protection, accountability, and transparency; closer oversight and audit mechanisms; and higher fines, penalties, and compensation for affected citizens. In particular, the DSA designates four tiers based on the type of platform and number of users, which are directly connected to the platform’s complexity, volumes of content, and products. Tiers 1-3 are overseen by member states and “competent authorities,” while Tier 4 is directly under the Commission’s supervision and subject to the most obligations, which helps to better address cross-state investigations and antitrust cases. This approach allows regulators to better address the reporting of platforms associated with higher risks and more complex cases for vulnerable groups affected by online attacks or abuse. These include not only addictive design but more serious violations such as assault and content promoting hatred or violence. The structure may also provide a better mechanism of compensation and litigation for affected groups through a protection and feedback loop with relevant authorities.

Accessibility Act and the Way Forward

Algorithms do not create biases but rather mirror social and historical distortions present in society, statistics, and existing practices and approaches. Mitigating algorithmic risks towards designated groups is a complex process, which necessitates an increased role for non-AI-specific legislation and moving beyond just forming “Algorithmic Safety Institutes.”

Thus, the DSA and DMA logically complement AI and data policies by categorizing the platforms, organizations, and market players behind the algorithms. To do so, they weigh their economic position and influence, the potential scale of risks and responsibility, and mechanisms of oversight.

The Acts present an opportunity to address non-algorithmic distortions better, including social and market factors, competition, and silos. They also bring a critical component: the participation of different types of designated stakeholders and the feedback loop between developers and users, creating a consistent set of values and expectations and transparent reporting.

Finally, to better address the needs of people with disabilities, it is important to see algorithms, platforms, and the topology of assistance and accessibility as connected ecosystems. This objective could be better achieved with the European Accessibility Act. This directive aims to improve the functioning of the internal market for accessible products and services by removing barriers created by divergent rules in EU Member States. It covers the products and services that have been identified as most important for persons with disabilities. It could be complemented by specialized frameworks, guidelines, and repositories developed by UNESCO, WHO, OECD, and other multilateral organizations and institutions.
