
Automated Decision Systems, Algorithmic Accountability & The Law

Miami - November 14, 2020

Automated Decision Systems are computational processes that facilitate or make statistics- and AI-based decisions impacting consumers or end-users. Algorithmic decision-making is often both opaque and complex (e.g. machine learning, including predictive modelling). The absence of transparency around unfair, biased, or damaging algorithmic decision-making (e.g. discriminatory predictive profiling) may prevent aggrieved individuals from obtaining an effective legal remedy.

“Making algorithms more accountable means ensuring that harms can be assessed, controlled and redressed. Ensuring algorithmic justice implies finding the right remedies and identifying the responsible parties to take action.” - Web Foundation

Lawmakers’ and policymakers’ approach to developing a legal framework establishing Comprehensive Algorithmic Accountability should:

(i) be centered on individuals’ right (some consider it a Human Right) not to be subject to decisions:

  • made by Algorithmic Automated Decision Systems without their views being taken into account, and
  • which significantly affect them;

(ii) require that Algorithmic Automated Decision Systems be designed and implemented in publicly accountable ways to mitigate harmful impacts on consumers and society.

The Fairness, Accountability and Transparency in Machine Learning Group proposes 5 principles for Algorithmic Accountability:

- Fairness (no discrimination or unjust impacts across various demographics; see the fairness-check sketch after this list),

- Explainability (in non-technical terms): Right to Explanation (e.g. Recital 71 of the EU GDPR and the French Digital Republic Act of 2016),

- Auditability (reviewability of the algorithm by professional third parties),

- Responsibility (if end-users suffer negative effects from an algorithm’s decision, redress must be made available to them by the algorithm’s owner, operator, or designer),

- Accuracy (identification of sources of error, unfairness, and legal violations).
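
To make the Fairness and Accuracy principles concrete, here is a minimal sketch of a demographic-parity audit in Python. The data, group labels, and the 0.8 threshold (the "four-fifths" heuristic borrowed from US employment-discrimination practice) are illustrative assumptions, not requirements of any of the legal texts discussed here.

```python
# Illustrative fairness audit (hypothetical data): compare positive-decision
# rates across demographic groups and flag large disparities.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, where outcome is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical loan decisions: (demographic group, approved?)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(sample))         # {'A': 0.67, 'B': 0.33} (approx.)
print(disparate_impact_flags(sample))  # {'A': False, 'B': True} -> B flagged
```

A flag from such a check is a starting point for an audit, not proof of unlawful discrimination; the appropriate metric and threshold depend on the legal and factual context.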

The U.S. Congress is considering an AI regulation Bill called the “Algorithmic Accountability Act of 2019”.

Algorithmic Accountability/Transparency concepts (coined in 2016 by Diakopoulos and Koliska) require that the factors influencing decisions made by algorithms be visible to the people who use the systems employing those algorithms. Algorithmic Accountability means that the entities using algorithms must be accountable for the decisions those algorithms make, even when the decisions are made by a machine. Algorithmic Transparency requires that the inputs to the algorithm and the algorithm’s use itself be known, though it does not require that either be fair. A minimal illustration of such visibility follows.
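
The sketch below shows one way a system could make its decision factors visible, assuming a hypothetical linear scoring model: every decision is returned together with a record of its inputs and each factor’s contribution to the outcome. The feature names, weights, and threshold are invented for illustration.

```python
# Minimal transparency sketch (hypothetical model): record, for each decision,
# the inputs and each factor's contribution to the score, so the factors that
# influenced the outcome are visible to those affected by the system.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # assumed model
THRESHOLD = 1.0

def decide_with_record(applicant: dict) -> dict:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "inputs": applicant,             # what the algorithm saw
        "contributions": contributions,  # how each factor influenced the score
        "score": round(score, 2),
        "decision": "approve" if score >= THRESHOLD else "deny",
    }

record = decide_with_record({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(record["decision"], record["contributions"])
```

For complex models (e.g. deep learning), producing such a factor-by-factor record is substantially harder, which is precisely why opacity is a central concern of these frameworks.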

The California Consumer Privacy Act includes no provisions on Algorithmic Accountability/Transparency. The EU GDPR barely covers the topic, granting individuals only a vague Right to Explanation when they are subject to decisions based solely on automated processing (Recital 71).

The Algorithmic Accountability Act defines an Automated Decision System as a computational process, derived from statistics/AI techniques, that facilitates or makes decisions and impacts consumers. If enacted as drafted, the Bill would apply the Algorithmic Accountability/Transparency concepts only to a limited extent. Indeed, it currently requires only large entities (not small ones) to conduct impact assessments of High-Risk Automated Decision Systems, i.e. systems which (a screening sketch follows this list):

1) pose a significant risk (i) to privacy/security of consumers’ personal information (PI), or (ii) of resulting in inaccurate/unfair/biased/discriminatory decisions impacting consumers;

2) facilitate/make decisions based on systematic/extensive/predictive evaluations of consumers (e.g. work performance, economic situation, health, personal preferences, interests, behavior, location/movements) that significantly impact them or alter their legal rights;

3) involve consumers’ PI (e.g. ethnicity, color, national origin, political opinions, religion, trade union membership, genetic/biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions or arrests).
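
As an illustration only (not legal advice), the three criteria above can be read as a screening checklist. The sketch below encodes them as a simple disjunctive check, assuming that meeting any one criterion makes a system “high-risk”; the Bill’s actual text should be consulted for the precise test, and all field names are invented.

```python
# Illustrative screening checklist: encodes the Bill's high-risk criteria as
# a function over self-reported answers about a given system.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    significant_privacy_or_security_risk: bool          # criterion 1(i)
    risk_of_unfair_or_biased_decisions: bool            # criterion 1(ii)
    extensive_evaluation_with_significant_impact: bool  # criterion 2
    uses_sensitive_personal_information: bool           # criterion 3

def is_high_risk(p: SystemProfile) -> bool:
    # Assumption: the criteria are disjunctive (any one suffices).
    return (
        p.significant_privacy_or_security_risk
        or p.risk_of_unfair_or_biased_decisions
        or p.extensive_evaluation_with_significant_impact
        or p.uses_sensitive_personal_information
    )

profile = SystemProfile(False, True, False, False)
print(is_high_risk(profile))  # True -> impact assessment required (large entities)
```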

As the Bill is currently drafted, “companies would be required to reasonably address concerns these assessments identify, but companies would not be required to disclose these impact assessments. However, failure to comply would be considered an unfair or deceptive act under the Federal Trade Commission Act and thus subject to regulatory action.” - Joshua New (Technology Policy Executive at IBM)

Dr. Ariel Humphrey