AI & Forensic Investigation: Lessons from algorithmic DNA analysis

  • Reaney, L. (Participant)
  • Karen Richmond (Participant)

Activity: Participating in or organising an event › Participation in workshop, seminar, course

Description

Challenges relating to the standardisation and regulation of novel forms of 'machine learning' and Artificial Intelligence (AI) continue to receive significant attention from academics and associated institutional agents (Fomin 2019), not least because this emergent field of data science tends to be viewed as posing wholly novel problems for professionals and the public alike. However, commentators have largely overlooked the fact that the seemingly intractable challenges associated with new uses of AI are relatively familiar, particularly within the forensic science and legal fields. Indeed, it is the experience of standardising and regulating algorithms within the criminal justice system that may help us to set the most rigorous and widely adopted standards for this latest wave of 'disruptive' technologies (Mitchell 2010).

Arguably, the challenges associated with the use of forensic AI are an extension of those encountered by the courts in relation to algorithmic DNA analysis software. In retrospect, we can see how the introduction of computer-driven probabilistic genotyping methods in DNA mixture analysis (around 2010) initially appeared to solve some of the issues arising from the increased sensitivity of DNA profiling methods. However, the evidence derived from these techniques soon ran into challenges from defence teams, who expressed concerns relating both to the absence of validation and to the lack of transparency. This necessitated the introduction of novel procedures and validation protocols (Haned et al. 2016). The likelihood-ratio reasoning at the heart of these tools is sketched below.
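The following Python fragment is a deliberately simplified, semi-continuous sketch in the spirit of tools such as those discussed by Haned et al.: it compares the probability of the observed alleles under the prosecution and defence hypotheses. All allele frequencies, drop-out and drop-in rates here are invented for illustration; real probabilistic genotyping systems additionally model peak heights, stutter and many further parameters.

```python
from itertools import combinations_with_replacement

# Illustrative allele frequencies at a single hypothetical STR locus
# (made-up numbers, not real population data).
FREQS = {"A": 0.10, "B": 0.25, "C": 0.05, "D": 0.60}

DROP_OUT = 0.10   # assumed per-allele-copy drop-out probability
DROP_IN = 0.01    # assumed drop-in probability for an unexplained allele

def prob_evidence(observed, genotypes):
    """P(observed allele set | combined contributor genotypes),
    under a simple independent drop-out / drop-in model."""
    copies = {}
    for gt in genotypes:
        for allele in gt:
            copies[allele] = copies.get(allele, 0) + 1
    p = 1.0
    for allele, n in copies.items():
        if allele in observed:
            p *= 1 - DROP_OUT ** n          # at least one copy survived
        else:
            p *= DROP_OUT ** n              # every copy dropped out
    for allele in observed:
        if allele not in copies:
            p *= DROP_IN * FREQS[allele]    # unexplained allele: drop-in
    return p

def genotype_prior(gt):
    """Hardy-Weinberg prior for an unknown contributor's genotype."""
    a, b = gt
    return FREQS[a] ** 2 if a == b else 2 * FREQS[a] * FREQS[b]

def likelihood(observed, known, n_unknown):
    """Sum P(evidence | genotypes) over every possible genotype of the
    unknown contributors, weighted by population frequency."""
    if n_unknown == 0:
        return prob_evidence(observed, known)
    total = 0.0
    for gt in combinations_with_replacement(FREQS, 2):
        total += genotype_prior(gt) * likelihood(observed, known + [gt],
                                                 n_unknown - 1)
    return total

# Mixed stain shows alleles {A, B, C}; the suspect's genotype is (A, B).
# Hp: suspect plus one unknown contributor; Hd: two unknown contributors.
evidence = {"A", "B", "C"}
lr = (likelihood(evidence, [("A", "B")], 1)
      / likelihood(evidence, [], 2))
print(f"Likelihood ratio: {lr:.1f}")
```

Even in this toy form, the model's output depends on parameters (drop-out and drop-in rates, frequency databases) that are invisible in the final likelihood ratio, which is precisely why validation and disclosure became points of contention.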

Problems relating to validation and transparency re-emerge with the implementation of AI within the forensic field. Indeed, such problems become fundamental, particularly in relation to the deployment of 'opaque AI'. The opaque variant adjusts its algorithms in order to learn, through a process of trial and error, gradually becoming more efficient. However, this process of manipulation and change occurs beyond the threshold of human perception. When such technologies are introduced into the forensic sphere, they may present seemingly insoluble problems, since transparency is central to the testing of new technologies in the courtroom. A toy illustration of this opacity follows.
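The sketch below is purely illustrative (the data, task and search procedure are invented): a classifier is improved by blind trial and error, and the finished model is nothing more than a handful of numeric weights, with no human-readable account of why any particular input is classified as it is.

```python
import random

random.seed(1)

# Invented two-feature training points, standing in for any forensic
# scoring task; labels are 1 or 0.
data = [((0.2, 0.9), 1), ((0.8, 0.1), 0), ((0.3, 0.7), 1), ((0.9, 0.3), 0)]

def accuracy(w):
    """Fraction of points the linear rule x1*w0 + x2*w1 > w2 gets right."""
    return sum((x1 * w[0] + x2 * w[1] > w[2]) == bool(y)
               for (x1, x2), y in data) / len(data)

weights = [0.0, 0.0, 0.0]
for _ in range(10_000):                       # trial and error
    trial = [w + random.gauss(0, 0.1) for w in weights]
    if accuracy(trial) >= accuracy(weights):  # keep changes that help
        weights = trial

print(weights)  # three numbers with no human-readable rationale
```

The learned behaviour is encoded entirely in those final values; each individual adjustment was too small and too numerous to audit, which is the opacity problem in miniature.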

This presentation - based upon work in progress - attempts to address the transparency problem through a comprehensive assessment of the current challenges facing the utilisation of forensic AI. It attempts to posit solutions, whilst considering the ways in which we might re-conceptualise these new opaque technologies (Harman 2009).
Period: 28 Feb 2020
Event type: Workshop

Keywords

  • Research
  • Networking