How to Prevent Discriminatory Outcomes in Machine Learning

Human Rights – Machine Learning

World Economic Forum
Global Future Council on Human Rights 2016-2018
March 2018 :: 30 pages
PDF: http://www3.weforum.org/docs/WEF_40065_White_Paper_How_to_Prevent_Discriminatory_Outcomes_in_Machine_Learning.pdf
Abstract
Machine learning applications are already being used to make many life-changing decisions – such as who qualifies for a loan, and whether someone is released from prison. A new model is needed to govern how those developing and deploying machine learning can address the human rights implications of their products. This paper offers comprehensive recommendations on ways to integrate principles of non-discrimination and empathy into machine learning systems.
This White Paper was written as part of the ongoing work by the Global Future Council on Human Rights, a group of leading academic, civil society and industry experts providing thought leadership on the most critical issues shaping the future of human rights.

Excerpt from Executive Summary
…The challenges
While algorithmic decision-making aids have been used for decades, machine learning is posing new challenges due to its greater complexity, opaqueness, ubiquity, and exclusiveness.

Some challenges relate to the data used by machine learning systems. The large datasets needed to train these systems are expensive to collect or purchase, which effectively excludes many companies, public bodies and civil society organizations from the machine learning market. Training data may exclude classes of individuals who generate little data, such as those living in rural areas of low-income countries or those who have opted out of sharing their data. The data may also be biased or error-ridden.

Even if machine learning algorithms are trained on good datasets, their design or deployment can encode discrimination in other ways: choosing the wrong model (or the wrong data); building a model with inadvertently discriminatory features; omitting human oversight and involvement; deploying unpredictable and inscrutable systems; or allowing unchecked, intentional discrimination.

There are already examples of systems that disproportionately identify people of color as being at “higher risk” for committing a crime, or systematically exclude people with mental disabilities from being hired. Risks are especially high in low- and middle-income countries, where existing inequalities are often deeper, training data are less available, and government regulation and oversight are weaker.

While ML has implications for many human rights, not least the right to privacy, we focus on discrimination because of the growing evidence of its salience to a wide range of private-sector entities globally, including those involved in data collection or algorithm design, as well as those that deploy ML systems developed by a third party. The principle of non-discrimination is critical to all human rights, whether civil and political, like the rights to privacy and freedom of expression, or economic and social, like the rights to adequate health and housing.

Drawing on existing work, we propose four central principles to combat bias in machine learning and uphold human rights and dignity:
– Active Inclusion: The development and design of ML applications must actively seek a diversity of input, especially of the norms and values of specific populations affected by the output of AI systems.
– Fairness: People involved in conceptualizing, developing, and implementing machine learning systems should consider which definition of fairness best applies to their context and application, and prioritize it in the architecture of the machine learning system and its evaluation metrics (a minimal illustration of two such metrics follows this list).
– Right to Understanding: Involvement of ML systems in decision-making that affects individual rights must be disclosed, and the systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. Where this is impossible and rights are at stake, leaders in the design, deployment and regulation of ML technology must question whether it should be used.
– Access to Redress: Leaders, designers and developers of ML systems are responsible for identifying the potential negative human rights impacts of their systems. They must make visible avenues for redress for those affected by disparate impacts, and establish processes for the timely redress of any discriminatory outputs.
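
The Fairness principle above notes that different contexts call for different definitions of fairness. As a purely illustrative sketch (not part of the paper), the following Python snippet computes two widely used quantitative definitions for a binary classifier and a binary protected attribute; the group labels, predictions, and loan-approval framing are hypothetical.

```python
# Illustrative sketch: two common quantitative fairness metrics for a binary
# classifier and a binary protected attribute. Which definition is
# appropriate depends on the context and application.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Hypothetical loan-approval outcomes for two groups (0 and 1).
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))          # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))   # ~0.33
```

Optimizing one such metric can worsen another, which is why the choice of fairness definition is a context-dependent design decision rather than a purely technical one.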

We recommend three steps for companies:
1. Identifying human rights risks linked to business operations. We propose that common standards for assessing the adequacy of training data and its potential bias be established and adopted through a multi-stakeholder approach (a minimal illustration of one such check follows this list).
2. Taking effective action to prevent and mitigate risks. We propose that companies work on concrete ways to enhance company governance, establishing or augmenting existing mechanisms and models for ethical compliance.
3. Being transparent about efforts to identify, prevent, and mitigate human rights risks. We propose that companies monitor their machine learning applications and report findings, working with certified third-party auditing bodies in ways analogous to industries such as rare mineral extraction. Large multinational companies should set an example by taking the lead. Results of audits should be made public, together with responses from the company…
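
As a purely illustrative sketch (not part of the paper), the following Python snippet shows one narrow kind of training-data check mentioned in the first recommendation: comparing the group shares in a training set against reference population shares to flag under-representation. The attribute, group labels, reference shares, and tolerance are hypothetical; a real adequacy assessment would be far broader.

```python
# Illustrative sketch: flag groups whose share of the training data falls
# well below their share of a reference population. Thresholds and labels
# are hypothetical placeholders.
from collections import Counter

def representation_report(records, attribute, reference_shares, tolerance=0.10):
    """Compare data shares with reference shares for each group."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference_shares.items():
        data_share = counts.get(group, 0) / total
        report[group] = {
            "data_share": round(data_share, 3),
            "reference_share": ref_share,
            "under_represented": data_share < ref_share - tolerance,
        }
    return report

# Hypothetical example: region of residence in a credit-scoring training set.
training_records = [{"region": "urban"}] * 90 + [{"region": "rural"}] * 10
print(representation_report(training_records, "region",
                            {"urban": 0.55, "rural": 0.45}))
```

A check like this covers only representation; assessing label bias, measurement error, and proxy variables would require additional, context-specific analysis.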

::::::

WEF – Global Future Council on Human Rights
Co-Chairs
Erica Kochi
Michael H. Posner

Members
Dapo Akande
Anne-Marie Allgrove
Michelle Arevalo-Carpenter
Daniel Bross
Amal Clooney
Steven Crown
Eileen Donahoe
Sherif Elsayed-Ali
Isabelle Falque-Pierrotin
Damiano de Felice
Samuel Gregory
Miles Jackson
May-Ann Lim
Katherine Maher
Marcela Manubens
Andrew McLaughlin
Mayur Patel
Esra’a Al Shafei
Hilary Sutcliffe
Manuela M. Veloso