TU Delft expert judgment data base

Roger M Cooke, Louis LHJ Goossens

Research output: Contribution to journal › Article

165 Citations (Scopus)

Abstract

We review the applications of structured expert judgment uncertainty quantification using the “classical model” developed at the Delft University of Technology over the last 17 years [Cooke RM. Experts in uncertainty. Oxford: Oxford University Press; 1991; Expert judgment study on atmospheric dispersion and deposition. Report Faculty of Technical Mathematics and Informatics No.01-81, Delft University of Technology; 1991]. These involve 45 expert panels, performed under contract with problem owners who reviewed and approved the results. With a few exceptions, all these applications involved the use of seed variables; that is, variables from the experts’ area of expertise for which the true values are available post hoc. Seed variables are used to (1) measure expert performance, (2) enable performance-based weighted combination of experts’ distributions, and (3) evaluate and hopefully validate the resulting combination or “decision maker”. This article reviews the classical model for structured expert judgment and the performance measures, reviews applications, comparing performance-based decision makers with “equal weight” decision makers, and collects some lessons learned.
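The performance-based combination described in the abstract can be sketched in miniature. The fragment below is a simplified, hypothetical illustration only: it scores each expert's calibration from how often seed-variable realizations fall between the expert's stated 5%, 50%, and 95% quantiles, then normalizes those scores into combination weights. The full classical model also multiplies in an information score and applies a significance cutoff, both omitted here; all data and function names are invented for illustration.

```python
import math

# Expected probabilities of the four inter-quantile bins when an expert
# states 5%, 50%, and 95% quantiles for each seed variable.
P = (0.05, 0.45, 0.45, 0.05)

def bin_counts(quantiles, realizations):
    """Count how many realizations fall in each inter-quantile bin."""
    counts = [0, 0, 0, 0]
    for (q05, q50, q95), x in zip(quantiles, realizations):
        if x < q05:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_score(quantiles, realizations):
    """p-value of the statistic 2*N*KL(s || p) against chi-square with 3 dof."""
    n = len(realizations)
    counts = bin_counts(quantiles, realizations)
    # Empirical bin frequencies s versus theoretical P (0 log 0 taken as 0).
    kl = sum((c / n) * math.log((c / n) / p) for c, p in zip(counts, P) if c > 0)
    stat = 2 * n * kl
    # Closed-form survival function of the chi-square distribution with 3 dof.
    return math.erfc(math.sqrt(stat / 2)) + math.sqrt(2 * stat / math.pi) * math.exp(-stat / 2)

def performance_weights(expert_quantiles, realizations):
    """Normalize calibration scores into weights (information score omitted)."""
    raw = [calibration_score(q, realizations) for q in expert_quantiles]
    total = sum(raw)
    return [w / total for w in raw]
```

On synthetic seed data, an expert whose 90% intervals capture roughly 90% of the realizations receives nearly all the weight, while an overconfident expert whose realizations consistently fall outside the stated intervals is driven toward zero — the mechanism that distinguishes the performance-based decision maker from the equal-weight one.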
Language: English
Pages: 657-674
Number of pages: 18
Journal: Reliability Engineering and System Safety
Volume: 93
Issue number: 5
DOI: 10.1016/j.ress.2007.03.005
Publication status: Published - May 2008


Keywords

  • TU Delft
  • expert judgment
  • data base
  • information
  • subjective probability
  • rational consensus
  • calibration

Cite this

Cooke, Roger M; Goossens, Louis LHJ. TU Delft expert judgment data base. In: Reliability Engineering and System Safety. 2008; Vol. 93, No. 5. pp. 657-674.
@article{00a1127cef5946a8859baa915a7e38a0,
title = "TU Delft expert judgment data base",
abstract = "We review the applications of structured expert judgment uncertainty quantification using the “classical model” developed at the Delft University of Technology over the last 17 years [Cooke RM. Experts in uncertainty. Oxford: Oxford University Press; 1991; Expert judgment study on atmospheric dispersion and deposition. Report Faculty of Technical Mathematics and Informatics No.01-81, Delft University of Technology; 1991]. These involve 45 expert panels, performed under contract with problem owners who reviewed and approved the results. With a few exceptions, all these applications involved the use of seed variables; that is, variables from the experts’ area of expertise for which the true values are available post hoc. Seed variables are used to (1) measure expert performance, (2) enable performance-based weighted combination of experts’ distributions, and (3) evaluate and hopefully validate the resulting combination or “decision maker”. This article reviews the classical model for structured expert judgment and the performance measures, reviews applications, comparing performance-based decision makers with “equal weight” decision makers, and collects some lessons learned.",
keywords = "TU Delft, expert judgment, data base, information, subjective probability, rational consensus, calibration",
author = "Cooke, {Roger M} and Goossens, {Louis LHJ}",
year = "2008",
month = "5",
doi = "10.1016/j.ress.2007.03.005",
language = "English",
volume = "93",
pages = "657--674",
journal = "Reliability Engineering and System Safety",
issn = "0951-8320",
number = "5",

}
