
Navigating the AI Horizon: A Journey Towards Trustworthy Adoption

Jameela Sahiba / Mar 8, 2024

The rapid proliferation of AI technologies has raised concerns about data privacy, user safety, and the potential displacement of jobs. A new global poll from the public relations firm Edelman finds that since 2019, public trust in AI companies has declined by eight percentage points, from 61% to 53%. A substantial number of respondents are concerned that AI may ultimately devalue what it means to be human.

As AI is deployed in products and services central to daily life, responsible governance becomes essential to balance the technology's unprecedented potential against the multifaceted challenges and risks it presents. Building trust in AI systems is the linchpin for achieving this delicate balance. Trustworthy AI is not just about legal compliance and robust functionality; it extends to addressing complex ethical questions. Further, trust in AI can be achieved only when the functioning of the technology and the outcomes associated with it align closely with overarching ethical principles, fostering confidence among users. Such trust forms the bedrock for flourishing societies, economies, and sustainable development.

Against this backdrop, The Dialogue undertook extensive research to develop the report “Towards Trustworthy AI: Sectoral Guidelines for Responsible Adoption.” The research process was driven by the recognition that trust in AI is not only pivotal but foundational for its successful integration into society. We conducted a meticulous scan of current AI regulations, from which we distilled nine universal principles essential for fostering trustworthiness: transparency and explainability, accountability, fairness and non-discrimination, reliability and safety, human autonomy and determination, privacy and data protection, social and environmental sustainability, governance and oversight, and contestability. Together, these principles are intended to function as an inclusive guide, steering discussions on cultivating trust and confidence in the use of AI.

Moving beyond theoretical considerations, we delved into the practical realm by operationalizing the identified principles across two critical sectors: finance and healthcare. Trust and responsibility are central tenets in these sectors due to the sensitive nature of financial transactions and the critical importance of AI-enabled decisions in patient care. We set out to address the intricacies specific to these sectors while also deriving insights that can inform trustworthy AI practices across diverse industries. To that end, we propose targeted strategies for diverse stakeholder groups, including developers, deployers, and end-users. These strategies address both technical and non-technical aspects and are tuned to the unique applications of AI within the finance and health sectors.

For example, to operationalize the principle of transparency and explainability in the finance sector, tools such as documentation, audits, and model selection are necessary on the technical side, while non-technical strategies include regulatory compliance, the establishment of ethical AI committees, and end-user education. These guidelines are intended to be integrated across the entire AI lifecycle, addressing the needs of developers, deployers, and end-users.

Similarly, to operationalize the same principle within the health sector, technical tools must include interpretable systems, third-party audits, and the development of user-friendly interfaces. Complementing these on the non-technical side are recommendations such as collaboration with experts, informed consent practices, and the training of healthcare professionals and students. Notably, some of the tools and strategies proposed for implementing one principle can also operationalize others, meaning a single measure may advance multiple trustworthy AI principles at once. By offering concrete, sector-specific guidance, we aim to bridge the gap between theoretical principles and actionable steps, promoting an environment where AI can be developed and deployed in alignment with broader societal goals.

We conclude by exploring the three pillars necessary to implement our principle-based framework: domestic coordination, international cooperation, and public-private collaboration. Effective AI governance requires domestic regulations to align with existing sectoral rules while adapting to a dynamic AI landscape. International cooperation is imperative for harmonizing regulations across borders and requires convergence on shared principles. Public-private partnerships, leveraging market mechanisms, play a crucial role in promoting responsible AI integration.

Deploying AI responsibly requires navigating its complexities methodically, ensuring that its potential is harnessed ethically. By fostering trustworthiness in AI, we can not only mitigate unintended consequences but also set the stage for a future where these powerful technologies support societal values and trust. We hope our effort will help usher in a future where AI technologies serve as a force for economic growth and positive change, one that is responsible, ethical, and aligned with the best interests of society.

The report is a knowledge product of The Dialogue and was authored by Rama Veda Shree, Jameela Sahiba, Bhoomika Agarwal, and Kamesh Shekar.

Authors

Jameela Sahiba
Jameela Sahiba is a lawyer by training and currently works as the Senior Programme Manager for the Artificial Intelligence Vertical at The Dialogue. The Dialogue is a technology policy think-tank based out of New Delhi that works at the intersection of technology, law and society.
