An evaluation of resource description quality measures

M. Baillie, L. Azzopardi, F. Crestani

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Abstract

An open problem for Distributed Information Retrieval is how to represent large document repositories (known as resources) efficiently. To facilitate resource selection, estimated descriptions of each resource are required, especially when faced with non-cooperative distributed environments [1]. Accurate and efficient resource description estimation is required, as it can affect resource selection and, as a consequence, retrieval quality. Query-Based Sampling (QBS) was proposed as a novel solution for resource estimation [2], with further techniques developed thereafter [3]. However, determining whether one QBS technique generates better resource descriptions than another remains an unresolved issue. The initial metrics tested and deployed for measuring resource description quality were the Collection Term Frequency ratio (CTF) and the Spearman Rank Correlation Coefficient (SRCC) [2]. The former indicates the percentage of terms seen, whilst the latter measures the term ranking order; neither considers term frequency, which is important for resource selection. We re-examine this problem and consider measuring the quality of a resource description in the context of resource selection, where an estimate of the probability of a term given the resource is typically required. We believe a natural measure for comparing the estimated resource against the actual resource is the Kullback-Leibler divergence (KL). KL addresses the concerns put forward previously by not over-representing low-frequency terms, while also considering term order [2]. In this paper, we re-assess the two previous measures alongside KL. Our preliminary investigation revealed that the former metrics display contradictory results, whilst KL suggested that a different QBS technique than that prescribed in [2] would provide better estimates.
This is a significant result, because it remains unclear which technique will consistently provide better resource descriptions. The remainder of this paper details the three measures and the experimental analysis of our preliminary study, and outlines our points of concern along with further research directions.
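To make the contrast between the measures concrete, the following sketch computes two of them over term-frequency dictionaries: the CTF ratio (the fraction of the actual resource's term occurrences covered by terms seen in the estimated description) and KL divergence between the actual and estimated term distributions. All function names are illustrative, and the epsilon smoothing of unseen terms is an assumption of this sketch, not a detail prescribed by the paper.

```python
import math

def ctf_ratio(actual, estimated):
    """Collection Term Frequency ratio: the proportion of term
    occurrences in the actual resource whose terms also appear
    in the estimated description (illustrative sketch)."""
    total = sum(actual.values())
    seen = sum(f for term, f in actual.items() if term in estimated)
    return seen / total

def kl_divergence(actual, estimated, eps=1e-9):
    """KL(actual || estimated) over term distributions derived from
    raw frequencies. Terms unseen in the estimate are smoothed with
    a small epsilon (an assumption here) so the divergence stays
    finite; the smoothed estimate still sums to one."""
    vocab = set(actual) | set(estimated)
    a_total = sum(actual.values())
    e_total = sum(estimated.values()) + eps * len(vocab)
    kl = 0.0
    for term in vocab:
        p = actual.get(term, 0) / a_total
        q = (estimated.get(term, 0) + eps) / e_total
        if p > 0:
            kl += p * math.log(p / q)
    return kl

# A sampled description that misses a term entirely scores well on
# CTF if it covers the frequent terms, but KL also penalises how far
# the estimated probabilities are from the actual ones.
actual = {"apple": 5, "banana": 3, "cherry": 2}
sampled = {"apple": 4, "banana": 2}
print(ctf_ratio(actual, sampled))   # covers 8 of 10 occurrences
print(kl_divergence(actual, sampled))
```

A perfect estimate gives a CTF ratio of 1.0 and a KL divergence of (approximately) zero; the gap between the two measures on partial samples is exactly what makes their rankings of QBS techniques diverge.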
Original language: English
Title of host publication: Proceedings of the 2006 ACM symposium on Applied computing
Pages: 1110-1111
Number of pages: 1
Publication status: Published - 2006

Keywords

  • repositories
  • resource selection
  • cataloguing
  • metadata
  • information retrieval
  • query-based sampling


Cite this

Baillie, M., Azzopardi, L., & Crestani, F. (2006). An evaluation of resource description quality measures. In Proceedings of the 2006 ACM symposium on Applied computing (pp. 1110-1111).