Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for

Lilian Edwards, Michael Veale

Research output: Contribution to journal › Article

Abstract

Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively presents as a means to "open the black box", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.
However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as "meaningful information about the logic of processing" — is unlikely to be provided by the kind of ML "explanations" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure.
As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure ("right to be forgotten") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.
Language: English
Pages: 1-65
Number of pages: 65
Journal: Duke Law and Technology Review
Volume: 16
Issue number: 1
State: Published - 4 Dec 2017


Keywords

  • privacy law
  • machine learning
  • algorithms
  • data protection
  • privacy

Cite this

@article{eac1e87db60640598f72a9a5fc4b1efa,
title = "Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for",
abstract = "Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a {"}right to an explanation{"} has emerged as a compellingly attractive remedy since it intuitively presents as a means to {"}open the black box{"}, hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core {"}algorithmic war stories{"} that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as {"}meaningful information about the logic of processing{"} — is unlikely to be provided by the kind of ML {"}explanations{"} computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However, {"}subject-centric{"} explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a {"}right to an explanation{"} in the GDPR may be at best distracting, and at worst nurture a new kind of {"}transparency fallacy{"}. However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure ({"}right to be forgotten{"}) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.",
keywords = "privacy law, machine learning, algorithms, data protection, privacy",
author = "Lilian Edwards and Michael Veale",
year = "2017",
month = "12",
day = "4",
language = "English",
volume = "16",
pages = "1--65",
journal = "Duke Law and Technology Review",
issn = "2328-9600",
publisher = "Duke University School of Law",
number = "1",

}

Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. / Edwards, Lilian; Veale, Michael.

In: Duke Law and Technology Review, Vol. 16, No. 1, 04.12.2017, p. 1-65.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for

AU - Edwards, Lilian

AU - Veale, Michael

PY - 2017/12/4

Y1 - 2017/12/4

N2 - Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively presents as a means to "open the black box", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as "meaningful information about the logic of processing" — is unlikely to be provided by the kind of ML "explanations" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure ("right to be forgotten") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.

AB - Algorithms, particularly of the machine learning (ML) variety, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively presents as a means to "open the black box", hence allowing individual challenge and redress, as well as potential to instil accountability to the public in ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as "meaningful information about the logic of processing" — is unlikely to be provided by the kind of ML "explanations" computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers' worries of IP or trade secrets disclosure. As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy". However, in our final sections, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure ("right to be forgotten") and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to build a more responsible, explicable and user-friendly algorithmic society.

KW - privacy law

KW - machine learning

KW - algorithms

KW - data protection

KW - privacy

UR - https://dltr.law.duke.edu/2017/12/04/slave-to-the-algorithm-why-a-right-to-an-explanation-is-probably-not-the-remedy-you-are-looking-for/

M3 - Article

VL - 16

SP - 1

EP - 65

JO - Duke Law and Technology Review

T2 - Duke Law and Technology Review

JF - Duke Law and Technology Review

SN - 2328-9600

IS - 1

ER -