Modelling epistemic uncertainty in IR evaluation

M. Yakici, M. Baillie, I. Ruthven, F. Crestani

Research output: Contribution to conference › Paper

Abstract

Modern information retrieval (IR) test collections violate the completeness assumption of the Cranfield paradigm. To make the best use of the available resources, only a sample of documents (i.e. the pool) is judged for relevance by human assessors. The subsequent evaluation protocol makes no distinction between assessed and unassessed documents: any document not in the pool is assumed to be non-relevant to the topic. This is beneficial from a practical point of view, as relative performance can be compared with confidence provided the experimental conditions are fair to all systems. However, given the incompleteness of the relevance assessments, two forms of uncertainty emerge during evaluation. The first is aleatory uncertainty, the variation in system performance across the topic set, which is typically addressed with statistical significance tests. The second is epistemic uncertainty, the amount of knowledge (or ignorance) we have about the estimate of a system's performance.
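To make the two notions concrete, the sketch below is an illustrative example, not the model proposed in the paper: the rankings, pooled judgements and the simple bounding trick are assumptions for demonstration only. It scores hypothetical runs with precision at rank 10 under the standard protocol (unjudged documents treated as non-relevant), re-scores them with unjudged documents treated as relevant to expose the gap left by incomplete assessments, and applies a paired t-test over per-topic scores as the conventional treatment of aleatory uncertainty.

```python
# Illustrative sketch only -- not the authors' model. The rankings, pooled
# judgements and document ids below are made up for demonstration.
from scipy import stats


def p_at_k(ranking, pool, k=10, unjudged_as_relevant=False):
    """Precision at rank k for a single topic.

    `pool` maps judged document ids to True (relevant) or False (non-relevant);
    documents absent from the pool are unjudged. The standard protocol scores
    them as non-relevant (unjudged_as_relevant=False); scoring them as relevant
    instead gives an optimistic upper bound.
    """
    top_k = ranking[:k]
    hits = sum(pool.get(doc, unjudged_as_relevant) for doc in top_k)
    return hits / k


# Hypothetical pooled judgements and system rankings for three topics.
pools = {
    "t1": {"d1": True, "d2": False, "d3": True},
    "t2": {"d7": True, "d8": True, "d9": False},
    "t3": {"d11": True, "d12": False},
}
runs = {
    "sysA": {"t1": ["d1", "d3", "d2", "d4"], "t2": ["d7", "d9", "d8"], "t3": ["d11", "d12", "d13"]},
    "sysB": {"t1": ["d2", "d1", "d5"], "t2": ["d8", "d7", "d10"], "t3": ["d12", "d13", "d11"]},
}

# Epistemic uncertainty: the gap between the pessimistic score (unjudged =
# non-relevant, i.e. the standard protocol) and the optimistic one (unjudged =
# relevant) shows how much the incomplete pool leaves unknown on each topic.
for system, topics in runs.items():
    for topic, ranking in topics.items():
        lower = p_at_k(ranking, pools[topic])
        upper = p_at_k(ranking, pools[topic], unjudged_as_relevant=True)
        print(f"{system} {topic}: P@10 in [{lower:.2f}, {upper:.2f}]")

# Aleatory uncertainty: variation across topics, conventionally handled with a
# paired significance test on the per-topic scores of the two systems.
a_scores = [p_at_k(runs["sysA"][t], pools[t]) for t in pools]
b_scores = [p_at_k(runs["sysB"][t], pools[t]) for t in pools]
t_stat, p_value = stats.ttest_rel(a_scores, b_scores)
print(f"paired t-test over topics: t = {t_stat:.2f}, p = {p_value:.2f}")
```

In this toy setup the interval width stands in for epistemic uncertainty (it shrinks as more of the pool is judged), while the significance test addresses only the aleatory spread across topics.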
Original language: English
Number of pages: 2
Publication status: Unpublished - Jul 2007
Event: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 07) - Amsterdam, Netherlands
Duration: 23 Jul 2007 - 27 Jul 2007

Conference

Conference: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 07)
City: Amsterdam, Netherlands
Period: 23/07/07 - 27/07/07

Keywords

  • systems
  • software performance
  • evaluation
  • performance evaluation
  • metrics
  • uncertainty
  • information retrieval

Cite this

Yakici, M., Baillie, M., Ruthven, I., & Crestani, F. (2007). Modelling epistemic uncertainty in IR evaluation. Paper presented at Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development on Information Retrieval (SIGIR 07), Amsterdam, Netherlands, .

URLs

  • http://www.sigir2007.org/
  • http://www.cis.strath.ac.uk/research/publications/papers/strath_cis_publication_1978.pdf
