Human Rights-Based Approach to AI

It is well recognised that the development of AI has been largely market-driven to date. Government policy will nonetheless have a crucial role in providing direction for industry and delivering on policy ambitions. How AI will unfold in the future remains unknown, although there is broad consensus that its long-term impacts could be profound, and the difficulty of assessing the social, economic and environmental impacts of AI makes it harder to develop effective AI policies. It is therefore imperative that a human rights-based approach to AI be adopted.

Companies should orient their standards, rules and system design around universal human rights principles. Public-facing terms and guidelines should be complemented by internal policy commitments to mainstreaming human rights considerations throughout a company’s operations, especially in relation to the development and deployment of AI and algorithmic systems. Companies should also consider how to elaborate professional standards for AI engineers, translating human rights responsibilities into guidance for technical design and operational choices. The development of codes of ethics and accompanying institutional structures may be an important complement to, but not a substitute for, commitments to human rights: codes and guidelines issued by both public and private sector bodies should emphasize that human rights law provides the fundamental rules for the protection of individuals in the context of AI, while ethics frameworks may assist in further developing the content and application of human rights in specific circumstances.

Companies and Governments must be explicit with individuals about which decisions in the information environment are made by automated systems and which are accompanied by human review, as well as the broad elements of the logic used by those systems. Individuals should also be informed when the personal data they provide to a private sector actor (either explicitly or through their use of a service or site) will become part of a dataset used by an AI system, so that they can factor that knowledge into their decisions about whether to consent to data collection and which types of data they wish to disclose. Similar to the public notices required for the use of closed-circuit television cameras, AI systems should actively disclose to individuals, through innovative means such as pop-up boxes and in a clear and understandable manner, that they are subject to, or contributing data to, an AI-driven decision-making process, along with meaningful information about the logic involved and the significance of the consequences for the individual.
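
By way of illustration only, the sketch below shows one way such a pop-up disclosure might be structured in code. The AIDecisionNotice fields and the render_popup helper are assumptions introduced for this example; they do not reflect any established standard or the practice of any particular company.

```python
# Illustrative sketch of a structured AI-decision disclosure notice.
# All field names and the helper below are hypothetical, not a standard.
from dataclasses import dataclass

@dataclass
class AIDecisionNotice:
    system_purpose: str    # what the automated system is for
    logic_summary: str     # plain-language description of the logic involved
    consequence: str       # significance of the decision for the individual
    data_used: list[str]   # categories of personal data feeding the system
    human_review: bool     # whether a human reviews the automated output
    contest_channel: str   # how the individual can query or contest the outcome

def render_popup(notice: AIDecisionNotice) -> str:
    """Format the notice as short, plain-language text for a pop-up box."""
    review = (
        "A person reviews this decision."
        if notice.human_review
        else "This decision is fully automated."
    )
    return (
        f"An automated system ({notice.system_purpose}) is involved in this decision. "
        f"How it works: {notice.logic_summary} "
        f"What it means for you: {notice.consequence} "
        f"Data used: {', '.join(notice.data_used)}. {review} "
        f"Questions or objections: {notice.contest_channel}"
    )
```

Keeping the disclosure as structured data, rather than free text, also makes it easier to audit whether every automated decision pathway actually surfaces a notice of this kind.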

Transparency does not stop with disclosure to individual users that AI technologies exist in the platforms and online services they use. Companies and Governments need to embrace transparency throughout every aspect of the AI value chain. Transparency need not be complex to be effective: even simplified explanations of the purpose, policies, inputs and outputs of an AI system can contribute to public education and debate. Rather than grappling with the predicament of making convoluted technical processes legible to lay audiences, companies should strive to achieve transparency through non-technical insights into a system. The focus should therefore be on educating individual users about an AI system’s existence, purpose, constitution and impact, rather than on its source code, raw training data or the technical detail of its inputs and outputs; a simplified public summary of this kind is sketched below.
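
As a purely hypothetical illustration of how brief such a non-technical summary can be, the fragment below lists plain-language entries for an imagined recommendation system. Every value is invented for the example, and the key names are assumptions rather than an established template.

```python
# Hypothetical plain-language transparency summary for an AI system.
# All keys and wording are illustrative assumptions, not a standard.
public_summary = {
    "existence": "Content on this service is ordered by an automated system.",
    "purpose": "To rank items by estimated relevance to the individual user.",
    "constitution": "A machine-learning model trained on aggregated interaction data; "
                    "no source code or raw training data is published.",
    "inputs": "Viewing history, declared interests and coarse location.",
    "outputs": "A ranked list of items; no decisions with legal effect are made.",
    "impact": "Shapes what users see first and may narrow exposure to diverse "
              "viewpoints if left unmonitored.",
    "oversight": "Outcomes are periodically reviewed for discriminatory effects.",
}
```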

Tackling the prevalence of discrimination in AI systems is an existential challenge for companies and Governments; a failure to address and resolve discriminatory elements and impacts will render the technology not only ineffective but dangerous. There is ample thought leadership, and there are ample resources, for companies and Governments to draw on in considering how to address bias and discrimination in AI systems. Broadly speaking, doing so requires isolating and accounting for discrimination at both the input and output levels. At a minimum, this involves addressing sampling errors (where datasets are not representative of society), scrubbing datasets to remove discriminatory data, and putting in place measures to compensate for data that “contain the imprint of historical and structural patterns of discrimination” and from which AI systems are likely to develop discriminatory proxies. Active monitoring of discriminatory outcomes of AI systems is also integral to avoiding and mitigating adverse effects on the human rights of individuals; a simple illustration of such input- and output-level checks follows.
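
As a minimal sketch of what input- and output-level checks might look like in practice, the Python fragment below flags under-representation in a dataset and compares favourable-outcome rates across groups. The record keys, helper names and interpretation thresholds are assumptions made for illustration, not a prescribed methodology.

```python
# Illustrative checks only: detect non-representative sampling (input level)
# and disparate outcome rates across groups (output level).
from collections import Counter

def representation_gaps(records, group_key, population_shares):
    """Difference between each group's share of the dataset and its share of
    the wider population; large negative gaps indicate sampling errors."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total if total else 0.0
        gaps[group] = sample_share - pop_share
    return gaps

def outcome_rate_ratio(records, group_key, outcome_key, favourable=1):
    """Ratio of the lowest to the highest favourable-outcome rate across
    groups; values well below 1.0 warrant human investigation."""
    tallies = {}
    for r in records:
        hits, n = tallies.setdefault(r[group_key], [0, 0])
        tallies[r[group_key]] = [hits + (r[outcome_key] == favourable), n + 1]
    shares = {g: hits / n for g, (hits, n) in tallies.items() if n}
    if not shares:
        return 0.0
    return min(shares.values()) / max(shares.values())
```

In practice, checks of this kind would sit inside a broader monitoring pipeline, run on live outcomes as well as training data, and trigger human review rather than automated remediation.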