Algorithmic oppression with Chinese characteristics: AI against Xinjiang’s Uyghurs

Research output: Chapter in Book/Report/Conference proceeding › Other chapter contribution


Abstract

The ways in which artificial intelligence (AI), in particular facial recognition technology, is being used by the Chinese state against the Uyghur ethnic minority demonstrate how big data gathering, analysis and AI have become ubiquitous surveillance mechanisms in China. These actual uses of facial recognition are compared with the rhetoric on AI ethics that is beginning to emerge from public and private actors in China. Implications include the mismatch between rhetoric and practice with regard to AI in China; a more global understanding of algorithmic discrimination, which in China explicitly targets and categorises Uyghur people and other ethnic minorities; and a greater awareness of AI technologies developed and used in China which may then be exported to other states, including supposed liberal democracies, and used in similar ways.
Original language: English
Title of host publication: Artificial intelligence: Human rights, social justice and development
Subtitle of host publication: Global Information Society Watch 2019 Report
Pages: 108-112
Number of pages: 5
Publication status: Published - 2019


Keywords

  • China
  • artificial intelligence (AI)
  • oppression
  • Uyghurs
  • facial recognition technology
  • algorithmic discrimination

Cite this

Daly, A. (2019). Algorithmic oppression with Chinese characteristics: AI against Xinjiang’s Uyghurs. In Artificial intelligence: Human rights, social justice and development: Global Information Society Watch 2019 Report (pp. 108-112).