Kenya Must Update its Regulatory Frameworks to Keep Pace with AI
Bulelani Jili / Oct 2, 2023

Bulelani Jili is a Meta Research Ph.D. fellow at Harvard University.
Kenya’s existing regulatory instruments are insufficient to address the emerging challenges brought by the adoption of AI systems. Accordingly, the speed of both the development and adoption of AI may outpace Kenya’s current regulatory frameworks. Fixing this does not necessarily require new legislation, but it will require more detailed analysis and assessments by government authorities of how existing laws could apply to AI-related issues. All stakeholders, including private firms and government officials, should work with the Kenyan government to design a balanced and effective legal and regulatory framework that addresses human rights concerns and protects civil liberties, while still supporting innovation.
Overview of applicable laws
Like most countries, Kenya does not have a stand-alone national AI strategy or regulatory framework. Instead, it relies on several existing laws to address issues related to AI and digital technologies. These include the Data Protection Act (DPA) of 2019, which establishes a framework for data protection in Kenya. The DPA is foundational, given that many AI systems rely on collecting and processing large amounts of personal data. For example, Section 35 of the DPA defines automated decision-making as the "ability to make decisions by technological means without human involvement." It also sets out data subjects' right not to be subjected to harmful decisions made solely by automated means.
Several other provisions in the DPA also seek to safeguard individuals against potentially harmful data processing practices and uses, such as Section 30(1), which provides that data controllers and data processors may not process personal data unless the data subject has consented to the processing. AI operators (those using AI to provide a service) must take this into account to ensure that automated decisions do not negatively impact users' rights.
Other provisions of the DPA that are relevant to AI operators include Section 28, which requires that data not be misused and be processed only for legally permissible purposes. Section 31 further specifies that if processing operations are likely to result in a risk to the rights of the data subject, "by virtue of its nature, scope, context, and purposes," the data controller or data processor must carry out a data protection impact assessment. In 2020, Kenya's High Court halted the government's attempt to roll out the National Integrated Identity Management System (NIIMS), also known as Huduma Namba (Swahili for "service number"), citing Section 31.
This national database contains personal information about Kenyan citizens and foreign residents and is critical for accessing public services, yet the government had no clear safeguards in place to guarantee the security of biometric data. Given the sensitive nature of that data, the High Court halted the rollout on the grounds that the government first needed to conduct the impact assessment that Section 31 requires.
Other regulatory requirements with consequences for AI operations include Kenya's Computer Misuse and Cybercrimes Act of 2018, which provides a framework for dealing with offenses related to digital platforms. It enables "timely and effective detection, prohibition, prevention, response, investigation, and prosecution of computer and cybercrimes; to facilitate international cooperation in dealing with computer and cybercrime matters; and for connected purposes." As the implementing body, the National Computer and Cybercrimes Co-ordination Committee (NCCCC) is responsible for security-related challenges to critical infrastructure, digital platforms, and mobile money transfers. AI operators need to know, for example, how their products might result in publication of false information, cyber harassment, or unauthorized use of electronic data.
Improving and Updating Kenya’s AI Framework
To move AI policy forward, Kenyan policymakers need to conduct impact assessments of newly introduced tools and shore up existing laws. A major public concern, for example, is the adoption of facial recognition systems. Kenya has deployed facial recognition on ostensibly permissible grounds to address crime and terrorism, yet no evidence suggests that these systems have curbed crime. While several Kenyan laws are relevant to AI systems, none directly addresses the misuse of facial recognition. Currently, no policies or laws seek to govern the adoption and use of facial recognition, or even of analog CCTV systems more generally.
Kenya's broad adoption of facial recognition lacks a regulatory framework to ensure that these AI tools are accurate and do not undermine civil rights. Kenya should therefore carry out impact assessments of these tools to ensure that both the government's and the private sector's use of facial recognition protects people's privacy. For example, the United States National Institute of Standards and Technology (NIST) tests and evaluates facial recognition technologies. NIST's testing programs are open to any firm worldwide, so nothing stops Kenya from requiring that service providers be tested by NIST or similar bodies.
Moreover, there are no comprehensive safeguards against unwarranted surveillance, including the misuse of facial recognition systems. The Security Laws (Amendment) Act of 2014 consolidates and expands law enforcement's power to leverage digital tools to enhance its "ability to detect, monitor, and eliminate security threats." This far-reaching security law, combined with the absence of safeguards against unwarranted surveillance, creates conditions that threaten civil liberties.
Addressing the above gaps
In response, political leaders need to pursue legislative instruments that establish accountability for the use of surveillance systems. This may mean elaborating on existing laws to cover AI and other emerging technologies, or crafting a new framework tailored to the challenges of digital surveillance. More to the point, the absence of an explicit framework hampers the responsible use of AI. Expanding current legal arrangements, or formulating new regulatory instruments for AI and other emerging technologies, would help Kenya identify and respond to both emerging opportunities and risks.
To address AI more broadly, Kenya's government can also develop regulatory guidance that elaborates on how government agencies interpret and apply existing laws to issues raised by AI and other emerging technologies. This would improve the relevance of Kenya's existing laws and regulations. For example, Kenya's Distributed Ledger Technology and Artificial Intelligence Taskforce report offers the government a strategic roadmap for upholding human rights when adopting emerging technologies like AI. The report also recommends ways to leverage blockchain and AI to combat corruption and improve state transparency.
In addition, policymakers can learn from the High Court's decision on the National Integrated Identity Management System (NIIMS) and ensure the system does not deprive groups, especially historically vulnerable communities, of essential services. The judgment, along with guidance from government agencies, shows how Kenya's laws seek to promote citizens' rights by safeguarding against unfair and unlawful data processing practices. AI practitioners must accordingly consider how these provisions apply to them.
If Kenya wants to leverage the next generation of ICT systems for development, it needs to ensure its approach to AI and other emerging technologies is both carefully thought out and inclusive. A balanced approach to AI depends on nuanced analysis and appropriate legal and regulatory scaffolding. Getting the most out of AI also depends on engagement among all of Kenya's stakeholders: the government, the business community, human rights organizations, and others. All must be involved to ensure local laws and regulations support the benefits of AI while addressing harms in a targeted and proportionate manner.