Abstract
Ranking a number of retrieval systems according to their retrieval effectiveness without relying on costly relevance judgments was first explored by Soboroff et al. [6]. Over the years, a number of alternative approaches have been proposed. We perform a comprehensive analysis of system ranking estimation approaches on a wide variety of TREC test collections and topic sets. Our analysis reveals that the performance of such approaches is highly dependent upon the topic or topic subset used for estimation. We hypothesize that the performance of system ranking estimation approaches can be improved by selecting the "right" subset of topics, and we show that using topic subsets improves performance by 32% on average, with a maximum improvement of up to 70% in some cases.
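For intuition, the sketch below shows the kind of judgment-free system ranking estimation pioneered by Soboroff et al. [6]: pool the top-retrieved documents of all systems, randomly sample a set of pseudo-relevant documents, score each system against those pseudo-judgments, and compare the estimated system ordering to the true one with Kendall's tau. This is a minimal illustration, not the authors' implementation; the function names and parameters (`pseudo_qrels`, `depth`, `sample_rate`) are assumptions introduced here.

```python
import random
from itertools import combinations

def pseudo_qrels(runs, depth=100, sample_rate=0.1, seed=0):
    """Soboroff-style pseudo-judgments (illustrative): pool the top-`depth`
    documents of every run, then randomly mark a fraction as relevant."""
    rng = random.Random(seed)
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:depth])
    k = max(1, int(sample_rate * len(pool)))
    return set(rng.sample(sorted(pool), k))

def average_precision(ranking, relevant):
    """Standard average precision of one ranked list against a judgment set."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / max(1, len(relevant))

def estimated_ranking(runs, relevant):
    """Order systems by their effectiveness under the (pseudo-)judgments."""
    scores = {sys: average_precision(run, relevant) for sys, run in runs.items()}
    return sorted(scores, key=scores.get, reverse=True)

def kendall_tau(order_a, order_b):
    """Rank correlation between two system orderings over the same systems."""
    pos = {sys: i for i, sys in enumerate(order_b)}
    concordant = discordant = 0
    for x, y in combinations(order_a, 2):  # x precedes y in order_a
        if pos[x] < pos[y]:
            concordant += 1
        else:
            discordant += 1
    n = len(order_a)
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Given per-topic runs such as `runs = {"sysA": ["d3", "d7", ...], ...}` and the true judgment-based ordering, `kendall_tau(estimated_ranking(runs, pseudo_qrels(runs)), true_order)` measures how well the estimate agrees with the truth; the paper's contribution concerns which subset of topics such estimates should be aggregated over.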
| Original language | English |
| --- | --- |
| Title of host publication | CIKM '09 Proceedings of the 18th ACM Conference on Information and Knowledge Management |
| Place of Publication | New York, NY, USA |
| Pages | 1859-1862 |
| Number of pages | 4 |
| DOIs | |
| Publication status | Published - 2 Nov 2009 |
| Externally published | Yes |
Keywords
- evaluation
- system ranking estimation