Enslaving the algorithm: from a 'right to an explanation' to a 'right to better decisions'?

Lilian Edwards, Michael Veale

Research output: Contribution to journal › Article › peer-review

69 Citations (Scopus)
934 Downloads (Pure)

Abstract

As concerns about unfairness and discrimination in "black box" machine learning systems rise, a legal "right to an explanation" has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyse newer explanation rights in French administrative law and the draft modernised Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. "Meaningful information" about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new "transparency fallacy": an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, "soft law", judicial review and model repositories deserve more attention, alongside catalysing agencies acting for users to control algorithmic system design.
Original language: English
Pages (from-to): 46-54
Number of pages: 9
Journal: IEEE Security and Privacy Magazine
Volume: 16
Issue number: 3
Early online date: 25 Jun 2018
DOIs
Publication status: E-pub ahead of print - 25 Jun 2018

Keywords

  • algorithmic decision making
  • algorithmic logic
  • systems design
  • legal rights
  • data protection
  • privacy law
