A probabilistic metric for the validation of computational models

Ksenija Dvurecenska, Steve Graham, Edoardo Patelli, Eann A. Patterson

Research output: Contribution to journal › Article › peer-review


Abstract

A new validation metric is proposed that combines a threshold based on the uncertainty in the measurement data with a normalised relative error, and that is robust in the presence of large variations in the data. The outcome of the metric is the probability that a model's predictions are representative of the real world, based on the specific conditions and confidence level pertaining to the experiment from which the measurements were acquired. Relative error metrics are traditionally designed for use with series of data values, but orthogonal decomposition has been employed to reduce the dimensionality of data matrices to feature vectors so that the metric can be applied to fields of data. Three previously published case studies, for which historical data were available, are used to demonstrate the efficacy of this quantitative approach to the validation process in the discipline of structural analysis; however, the concept could be applied to a wide range of disciplines and sectors in which modelling and simulation play a pivotal role.
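As a rough illustration of the approach described in the abstract, the sketch below reduces measured and predicted data fields to feature vectors via an orthogonal decomposition and reports the fraction of feature components whose normalised relative error lies within a threshold derived from the measurement uncertainty. It is a minimal Python sketch under stated assumptions: the decomposition basis (singular values from an SVD), the handling of the uncertainty threshold, and all function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def feature_vector(field, n_modes=10):
    """Reduce a 2-D data field to a short feature vector via an orthogonal
    decomposition (here, the leading singular values from an SVD; the
    paper's choice of decomposition may differ)."""
    _, s, _ = np.linalg.svd(np.asarray(field, dtype=float), full_matrices=False)
    return s[:n_modes]

def validation_probability(measured_field, predicted_field, uncertainty, n_modes=10):
    """Fraction of feature components whose normalised relative error falls
    within a threshold set by the measurement uncertainty (assumed here to
    be expressed on the same scale as the feature components)."""
    m = feature_vector(measured_field, n_modes)
    p = feature_vector(predicted_field, n_modes)
    norm = np.maximum(np.abs(m), abs(uncertainty))  # guard against division by tiny components
    relative_error = np.abs(p - m) / norm
    threshold = abs(uncertainty) / norm             # uncertainty on the same normalised scale
    return float(np.mean(relative_error <= threshold))

# Synthetic example: a "measurement" field and a slightly perturbed "prediction".
rng = np.random.default_rng(0)
measured = rng.normal(size=(50, 50))
predicted = measured + rng.normal(scale=0.05, size=(50, 50))
print(validation_probability(measured, predicted, uncertainty=0.2))
```

The published metric and its treatment of uncertainty and confidence level are more involved than this; the sketch is only meant to convey the overall structure of the calculation: decompose the fields, compare the feature vectors component-wise against an uncertainty-based threshold, and report the result as a probability.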
Original language: English
Number of pages: 14
Journal: Royal Society Open Science
Volume: 5
Issue number: 11
DOIs
Publication status: Published - 21 Nov 2018

Keywords

  • relative error
  • model validation
  • computational modelling
  • orthogonal decomposition
