Expert elicitation: using the classical model to validate experts' judgments

Abigail R. Colson, Roger M. Cooke

Research output: Contribution to journal › Article


Abstract

The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments. Expert elicitation is required when existing data and models cannot provide needed information. This makes validating expert judgments a challenge: judgments are elicited precisely because other data do not exist, so measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, which is an elicitation method that includes validation of the experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, which are items from the experts' field that have observed true values. The Classical Model scores experts on their performance in assessing the calibration questions and then produces performance-weighted combinations of the experts. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced by the Classical Model is more statistically accurate and more informative than an equal weighting of experts.
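The calibration (statistical-accuracy) scoring described above can be sketched in Python. This is a minimal illustration, not the authors' code: it assumes the common three-quantile (5%, 50%, 95%) elicitation format, and the function name `calibration_score` is ours. The empirical frequencies with which realizations fall in the four inter-quantile bins are compared to the theoretical probabilities (0.05, 0.45, 0.45, 0.05) via relative entropy, and the score is the p-value of the resulting chi-square goodness-of-fit statistic.

```python
import numpy as np
from scipy.stats import chi2

def calibration_score(quantiles, realizations, probs=(0.05, 0.45, 0.45, 0.05)):
    """Statistical-accuracy (calibration) score for one expert.

    quantiles: (N, 3) array of the expert's 5%, 50%, and 95% quantiles
    for N calibration questions; realizations: the N observed true values.
    """
    quantiles = np.asarray(quantiles, dtype=float)
    realizations = np.asarray(realizations, dtype=float)
    n = len(realizations)
    # Tally how many realizations land in each of the four inter-quantile bins.
    counts = np.zeros(4)
    for q, x in zip(quantiles, realizations):
        counts[np.searchsorted(q, x)] += 1
    s = counts / n                      # empirical bin frequencies
    p = np.asarray(probs, dtype=float)  # theoretical bin probabilities
    # Relative entropy I(s; p) of empirical vs. theoretical frequencies.
    mask = s > 0
    relent = float(np.sum(s[mask] * np.log(s[mask] / p[mask])))
    # 2N * I(s; p) is asymptotically chi-square with 3 degrees of freedom;
    # the score is the p-value of that goodness-of-fit statistic.
    return float(chi2.sf(2 * n * relent, df=3))
```

A statistically accurate expert, whose realizations hit the four bins in roughly 5/45/45/5 proportions, scores near 1; an overconfident expert scores near 0. The full Classical Model then multiplies this score by an information score and renormalizes across experts to obtain the performance weights, details we omit here.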
Language: English
Pages: 113–132
Number of pages: 20
Journal: Review of Environmental Economics and Policy
Volume: 12
Issue number: 1
Early online date: 2 Feb 2018
DOI: 10.1093/reep/rex022
Publication status: Published - 28 Feb 2018

Keywords

  • expert judgments
  • calibration
  • out-of-sample validation
  • classical model

Cite this

@article{62679c63dbf14179ad65d6256a74916e,
title = "Expert elicitation: using the classical model to validate experts' judgments",
abstract = "The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments. Expert elicitation is required when existing data and models cannot provide needed information. This makes validating expert judgments a challenge: judgments are elicited precisely because other data do not exist, so measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, which is an elicitation method that includes validation of the experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, which are items from the experts' field that have observed true values. The Classical Model scores experts on their performance in assessing the calibration questions and then produces performance-weighted combinations of the experts. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced by the Classical Model is more statistically accurate and more informative than an equal weighting of experts.",
keywords = "expert judgments, calibration, out-of-sample validation, classical model",
author = "Colson, {Abigail R.} and Cooke, {Roger M.}",
note = "This is a pre-copyedited, author-produced version of an article accepted for publication in Review of Environmental Economics and Policy, following peer review. The version of record Colson, A. R., & Cooke, R. M. (2018). Expert elicitation: using the classical model to validate experts' judgments. Review of Environmental Economics and Policy, 12(1), 113–132. is available online at: https://doi.org/10.1093/reep/rex022.",
year = "2018",
month = "2",
day = "28",
doi = "10.1093/reep/rex022",
language = "English",
volume = "12",
pages = "113–132",
journal = "Review of Environmental Economics and Policy",
issn = "1750-6816",
number = "1",

}

Expert elicitation: using the classical model to validate experts' judgments. / Colson, Abigail R.; Cooke, Roger M.

In: Review of Environmental Economics and Policy, Vol. 12, No. 1, 28.02.2018, p. 113–132.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Expert elicitation: using the classical model to validate experts' judgments

T2 - Review of Environmental Economics and Policy

AU - Colson, Abigail R.

AU - Cooke, Roger M.

N1 - This is a pre-copyedited, author-produced version of an article accepted for publication in Review of Environmental Economics and Policy, following peer review. The version of record Colson, A. R., & Cooke, R. M. (2018). Expert elicitation: using the classical model to validate experts' judgments. Review of Environmental Economics and Policy, 12(1), 113–132. is available online at: https://doi.org/10.1093/reep/rex022.

PY - 2018/2/28

Y1 - 2018/2/28

N2 - The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments. Expert elicitation is required when existing data and models cannot provide needed information. This makes validating expert judgments a challenge: judgments are elicited precisely because other data do not exist, so measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, which is an elicitation method that includes validation of the experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, which are items from the experts' field that have observed true values. The Classical Model scores experts on their performance in assessing the calibration questions and then produces performance-weighted combinations of the experts. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced by the Classical Model is more statistically accurate and more informative than an equal weighting of experts.

AB - The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments. Expert elicitation is required when existing data and models cannot provide needed information. This makes validating expert judgments a challenge: judgments are elicited precisely because other data do not exist, so measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, which is an elicitation method that includes validation of the experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, which are items from the experts' field that have observed true values. The Classical Model scores experts on their performance in assessing the calibration questions and then produces performance-weighted combinations of the experts. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced by the Classical Model is more statistically accurate and more informative than an equal weighting of experts.

KW - expert judgments

KW - calibration

KW - out-of-sample validation

KW - classical model

UR - https://academic.oup.com/reep

U2 - 10.1093/reep/rex022

DO - 10.1093/reep/rex022

M3 - Article

VL - 12

SP - 113

EP - 132

JO - Review of Environmental Economics and Policy

JF - Review of Environmental Economics and Policy

SN - 1750-6816

IS - 1

ER -