Best practices for authors of healthcare-related artificial intelligence manuscripts

Sujay Kakarmath, Andre Esteva, Rima Arnaout, Hugh Harvey, Santosh Kumar, Evan Muse, Feng Dong, Leia Wedlund, Joseph Kvedar

Research output: Contribution to journal › Editorial › peer-review

Abstract

Since its inception in 2017, npj Digital Medicine has attracted a disproportionate number of manuscripts reporting on uses of artificial intelligence. This field has matured rapidly in the past several years. There was initial fascination with the algorithms themselves (machine learning, deep learning, convolutional neural networks) and the use of these algorithms to make predictions that often surpassed prevailing benchmarks. As the discipline has matured, individuals have called attention to aberrancies in the output of these algorithms. In particular, criticisms have been widely circulated that algorithmically developed models may have limited generalizability due to overfitting to the training data and may systematically perpetuate various forms of biases inherent in the training data, including race, gender, age, and health state or fitness level (Challen et al. BMJ Qual. Saf. 28:231–237, 2019; O'Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2016). Given our interest in publishing the highest quality papers and the growing volume of submissions using AI algorithms, we offer a list of criteria that authors should consider before submitting papers to npj Digital Medicine.
Original language: English
Article number: 134
Journal: npj Digital Medicine
Volume: 3
Early online date: 16 Oct 2020
Publication status: E-pub ahead of print - 16 Oct 2020

Keywords

  • editorial
  • best practices
  • AI algorithms
  • healthcare
  • artificial intelligence (AI)
  • big data
  • digital medicine
