Asia’s Privacy Regulators Shape AI Policy on the Ground

Seth Hays / Apr 25, 2024

South Korea Personal Information Protection Commission (PIPC) Chairman Haksoo Ko.

While countries in Asia have not yet gone as far as the EU in passing an all-encompassing piece of legislation to regulate AI, the region’s privacy regulators are actively shaping AI governance and policy with existing tools. Recent AI-related policies and activities from across Asia reveal a diverse array of policy innovations by regulators, which should be shared more broadly within the region and globally.

In particular, recent AI-related privacy policies from Australia, Singapore, South Korea, Hong Kong, and New Zealand focus on practical business use cases and experiences, understanding the public’s concerns, promoting rights-protecting policies, and exploring areas of AI policy not yet included in the global discourse, such as indigenous peoples’ data sovereignty.

While developed economies in Asia have more established privacy laws and enforcement agencies, privacy frameworks in the region’s developing countries are not as robust. India, Indonesia, and Vietnam, for example, passed privacy laws only within the past two years. To better address AI regulation, more should be done to share the practical experience of regulating AI. Ultimately, these efforts will help prevent a digital divide in AI governance and policy in Asia.

Australia’s whole-of-government approach

In August of last year, the Office of the Australian Information Commissioner (OAIC) – Australia’s privacy regulator – issued a response to the government’s discussion paper on “Safe and Responsible AI.” The response provides insight into how the OAIC may regulate and shape AI in Australia from a privacy perspective.

The OAIC response notes that Australians are concerned about AI’s impact on their privacy, citing the 2023 Australian Community Attitudes to Privacy Survey (ACAPS), which found that 43% of Australians regard AI systems that use their personal information as one of the biggest privacy risks they face.

The OAIC notes that the Privacy Act is a principles-based and technology-neutral piece of legislation – making it applicable to AI technologies both current and future.

The response also notes that the February 2023 Privacy Act Review Report proposed 116 amendments to ensure the Act is fit for purpose in the digital age, including by addressing AI-related technology. One proposal, for example, would promote transparency about how automated decisions are made and require that this information be included in accessible privacy notices.

The OAIC response emphasizes the need for effective resourcing and enforcement tools, noting calls for mid-tier civil enforcement penalties. The OAIC also calls for applying some of the best practices of privacy governance – such as privacy impact assessments (PIAs) in high-risk settings – and for developing a similar AI impact assessment for cases where AI tools are used in high-risk ways.

The response notes the need for international interoperability of rules – such as those developed by privacy and data protection authorities at multilateral forums, including APEC, the Global Privacy Assembly, the Asia Pacific Privacy Authorities, the Common Thread Network, and the Global Privacy Enforcement Network.

Singapore’s pro-business stance

On March 1, Singapore’s Personal Data Protection Commission (PDPC) issued the “Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems.” The guidelines provide businesses in Singapore some certainty on how they can use personal data while developing or deploying AI systems that assist humans with or autonomously make decisions, recommendations, or predictions.

The guidelines underscore that organizations can use personal data through four main routes:

  1. When meaningful consent is given,
  2. Under the business improvement exception,
  3. Under the research exception, or
  4. Through anonymization.

The guidelines provide examples of how some of these exceptions can be applied. The business improvement exception may apply, for instance, when personal data is used to check an algorithm for bias or to debias datasets used in AI systems.

The guidelines note that the Personal Data Protection Act of 2012 (PDPA) applies to the collection and use of personal data in AI systems, so consent and notification obligations apply to these systems. The guidelines also stress the accountability obligation, which requires organizations to establish and regularly update their policies and processes.

The guidelines stress that organizations should be transparent in their operations so that users can provide meaningful consent and AI systems can be shown to be trustworthy – especially when the impact on consumers is high. Best practices may include developing model cards or system cards – standardized formats for sharing relevant information about AI tools with the wider public (think “nutrition labels,” but for AI systems).
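
To make the “nutrition label” idea concrete, below is a minimal sketch of how a model card might be represented as structured data. The field names and values are illustrative assumptions, not a format prescribed by the PDPC guidelines or any model-card standard.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative only: these fields are assumptions, not mandated by the
# PDPC guidelines or any particular model-card specification.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str               # what the system is for
    personal_data_used: list[str]   # categories of personal data involved
    training_data_summary: str      # provenance of the training data
    known_limitations: str          # failure modes users should know about
    bias_evaluation: str            # how fairness was checked

card = ModelCard(
    model_name="loan-approval-assistant",  # hypothetical system
    version="1.2.0",
    intended_use="Recommends loan decisions for human review",
    personal_data_used=["income", "employment history"],
    training_data_summary="Anonymized applications, 2018-2023",
    known_limitations="Lower accuracy for thin-file applicants",
    bias_evaluation="Demographic parity checked across age bands",
)

# Publish the card alongside the deployed system.
print(json.dumps(asdict(card), indent=2))
```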

Other best practices include data minimization – using only data with the attributes required to train and improve AI systems – and pseudonymizing or de-identifying personal data as a baseline control, along with data mapping, labeling, and lineage documentation.
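
As a rough illustration of data minimization (the dataset and column names below are hypothetical, not drawn from the guidelines), an organization might drop direct identifiers and keep only the attributes a model actually needs before the data ever reaches a training pipeline:

```python
import pandas as pd

# Hypothetical customer dataset; the column names are illustrative.
df = pd.DataFrame({
    "customer_id": [101, 102],
    "email": ["a@example.com", "b@example.com"],  # identifier, not needed
    "age": [34, 51],
    "monthly_spend": [120.0, 80.5],
    "churned": [0, 1],  # training label
})

# Data minimization: retain only the attributes the model requires,
# so direct identifiers never enter the training pipeline.
REQUIRED_ATTRIBUTES = ["age", "monthly_spend", "churned"]
training_df = df[REQUIRED_ATTRIBUTES]
```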

South Korea’s regulators focus on consumers

In March, the Korean Personal Information Protection Commission (PIPC) issued guidance on AI and the protection of personal privacy. Chairman Haksoo Ko – also a member of the UN’s High-Level Advisory Body on AI – announced new rules for firms using AI for automated decision-making (ADM). Data subjects who are subject to ADM have the right to request explanations of how decisions are made and to seek review of those decisions.

The PIPC notes that such mechanisms will promote trust in AI services and systems. Furthermore, the proposed rules may allow data subjects to refuse an ADM if it affects significant rights or obligations, subject to certain contract terms and exceptions. The guidance provides examples of where this may apply – such as employment and hiring settings, or fraud detection systems – but stipulates that it would not apply to advertising or recommendation systems.

Also in March, Korea’s PIPC investigated the use of personal information by large language models (LLMs) available in the country and provided guidance on best practices. The Commission was particularly concerned about resident registration numbers and credit card numbers that may have been included in training data, and about the lack of age verification mechanisms to prevent users under the age of 14 from using the tools. It noted that it would keep track of vulnerabilities and continue developing policies and follow-up measures in this fast-evolving space.

The PIPC is also working to develop more privacy-enhancing technologies. For AI and big data analysis, this takes the form of pseudonymization – whereby personal information is masked but remains usable for machine learning. The Commission conducted a public consultation on best practices in this area in April.
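
The PIPC materials do not prescribe a specific technique, but one common approach to pseudonymization is to replace direct identifiers with keyed hashes, so that records remain linkable for machine learning while the underlying identity is masked. A minimal sketch, with hypothetical field names and data:

```python
import hashlib
import hmac
import secrets

# Secret key stored separately from the dataset; anyone holding it could
# re-link pseudonyms to identities, so it must be tightly controlled.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier (e.g., a resident registration number) to a
    stable token. Identical inputs yield identical tokens, so records stay
    linkable for training, but the original value is not recoverable
    without the key."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"rrn": "900101-1234567", "purchase_total": 42.0}  # hypothetical
record["rrn"] = pseudonymize(record["rrn"])
```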

New Zealand focuses on rights

New Zealand’s Office of the Privacy Commissioner recently conducted a consultation on proposed rules for biometric technologies – one of the leading sources of concern about rights infringement in AI policymaking. The proposed rules examine how organizations should decide whether to use biometrics and how to assess that use – for example, through proportionality and transparency requirements, and by identifying cases where biometrics should not be used at all. The consultation is meant to build a code of practice for organizations that use biometric personal information.

The consultation paper recommends limitations on collecting biometric information on emotional states, physical states, or certain demographics (such as gender and race). The code would allow, for example, the collection of fingerprints by employers to identify employees, age verification for restricted content online, and the use of facial recognition by financial institutions for anti-fraud purposes.

This round of consultation recommends changes around consent. The changes recognize the practical challenges of obtaining meaningful consent, particularly when the specific implementation of the technology makes consent difficult to gather (e.g., biometric collection at a distance). The consultation also notes, importantly, that consumers often do not recognize the implications of consent agreements.

Notably, the consultation considers data sovereignty for the Māori. Indigenous rights in data and AI are an important concern for the respect of human rights in AI policy, yet one that is often not stressed on the global stage. The consultation acknowledges that “biometric technologies can exacerbate and perpetuate bias and negative profiling of Māori.”

Hong Kong’s privacy regulator leads AI governance

Hong Kong’s Office of the Privacy Commissioner for Personal Data (PCPD) last year conducted compliance checks on 28 local organizations using personal data in AI systems in order to better understand their practices and build appropriate policies. Privacy Commissioner Ada Chung said, “While AI has immense potential for driving productivity and economic growth, it also poses varying degrees of personal data privacy and ethical risks.”

In its investigation, the PCPD found that of the 21 organizations that used AI in their day-to-day operations, 19 had established governance frameworks, such as appointing a designated officer to oversee the development of AI products and services. Of the organizations that used personal data in AI, only 80% had conducted privacy impact assessments, but all had implemented appropriate security measures to protect data against unauthorized or accidental access.

The PCPD published its “Guidance on the Ethical Development and Use of Artificial Intelligence” in 2021. The guide provides a checklist organizations can use to ensure the guidance has been incorporated into the appropriate business processes.

The guide goes beyond privacy-related issues, and demonstrates the leading role that privacy authorities have in encouraging ethical AI development in Hong Kong. For example, the guide lists seven ethical AI principles businesses should follow. In addition to privacy, the others are accountability, human oversight, transparency, beneficial use, reliability, and fairness.

Share excellence in AI policy

The experiences of privacy regulators in South Korea, Australia, New Zealand, Singapore, and Hong Kong provide an opportunity for the region – particularly jurisdictions whose privacy rules are new or nonexistent – to leapfrog into advanced regulation of AI.

The Asia-Pacific region should leverage its diversity and promote the best practices in AI policy demonstrated by its privacy regulators: surveying and consulting the public on AI policy, crafting policy based on actual business practices rather than hypothetical activity, and taking seriously the concerns of marginalized or disadvantaged groups. Establishing an Asia-based Center of Excellence on AI Policy could be an effective means of bringing together government officials, academics, industry, and civil society to ensure that Asia’s experience shapes AI policy locally and globally.

Authors

Seth Hays
Seth Hays is Managing Director and Co-founder of APAC GATES, a non-profit management and rights advocacy consultancy based in Taipei. Seth has worked in the public interest and non-profit sector in Asia for over two decades, with extensive government affairs experience, including advocacy for consum...
