Expert elicitation: using the classical model to validate experts' judgments

Abigail R. Colson, Roger M. Cooke

Research output: Contribution to journal › Article


Abstract

The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments, and it is required when existing data and models cannot provide the needed information. This makes validating expert judgments a challenge: because they are used precisely when other data do not exist, measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, an elicitation method that validates experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, items from the experts' field whose true values are observed. The Classical Model scores experts on their performance on the calibration questions and then produces performance-weighted combinations of the experts' judgments. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced by the Classical Model is more statistically accurate and more informative than an equal weighting of experts.
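The scoring step described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: in Cooke's Classical Model, an expert typically gives 5%, 50%, and 95% quantiles for each calibration question, the observed realizations are binned against those quantiles, and the empirical bin frequencies are compared with the theoretical probabilities (0.05, 0.45, 0.45, 0.05) via relative entropy; 2N times that entropy is asymptotically chi-square with three degrees of freedom, and the calibration score is the resulting p-value. All function names and data here are illustrative.

```python
import math

# Theoretical probabilities of a realization landing in each interquantile
# bin when an expert gives 5%, 50%, and 95% quantiles
THEORETICAL = [0.05, 0.45, 0.45, 0.05]

def chi2_cdf_df3(x):
    """Chi-square CDF with 3 degrees of freedom (closed form):
    F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2)."""
    if x <= 0:
        return 0.0
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def calibration_score(quantiles, realizations):
    """Calibration score for one expert.

    quantiles    -- list of (q5, q50, q95) assessments, one per calibration question
    realizations -- observed true values of those calibration questions
    """
    # Count how many realizations fall in each interquantile bin
    counts = [0, 0, 0, 0]
    for (q5, q50, q95), x in zip(quantiles, realizations):
        if x <= q5:
            counts[0] += 1
        elif x <= q50:
            counts[1] += 1
        elif x <= q95:
            counts[2] += 1
        else:
            counts[3] += 1
    n = len(realizations)
    # Relative entropy of the empirical bin frequencies w.r.t. the
    # theoretical ones (terms with zero counts contribute nothing)
    info = sum((c / n) * math.log((c / n) / p)
               for c, p in zip(counts, THEORETICAL) if c > 0)
    # 2*n*info is asymptotically chi-square with 3 df; the score is the p-value
    return 1.0 - chi2_cdf_df3(2.0 * n * info)
```

A statistically accurate expert, whose realizations fall in the bins at roughly the theoretical rates, scores near 1; systematic over- or under-confidence drives the score toward 0. In the full Classical Model this calibration score is combined with an information score to form each expert's performance weight.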
Original language: English
Pages (from-to): 113–132
Number of pages: 20
Journal: Review of Environmental Economics and Policy
Volume: 12
Issue number: 1
Early online date: 2 Feb 2018
DOIs
Publication status: Published - 28 Feb 2018

Keywords

  • expert judgments
  • calibration
  • out-of-sample validation
  • classical model
