Evaluating the effort involved in relevance assessments for images

Martin Halvey, Robert Villa

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

4 Citations (Scopus)

Abstract

How assessors and end users judge the relevance of images has been studied in information science and information retrieval for a considerable time. The criteria by which assessors judge relevance have been intensively studied, and a large amount of work has investigated how relevance judgments for test collections can be generated more cheaply, for example through crowdsourcing. Relatively little work has investigated the process individual assessors go through to judge the relevance of an image. In this paper, we focus on the process by which relevance is judged for images and, in particular, the degree of effort a user must expend to judge relevance for different topics. Results suggest that topic difficulty and how semantic/visual a topic is impact user performance and perceived effort.
Original language: English
Title of host publication: Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR '14)
Place of publication: New York
Pages: 887-890
Number of pages: 4
DOIs: https://doi.org/10.1145/2600428.2609466
Publication status: Published - Mar 2014
Event: 37th International ACM SIGIR Conference on Research and Development in Information Retrieval - Gold Coast, Australia
Duration: 6 Jul 2014 – 11 Jul 2014

Conference

Conference: 37th International ACM SIGIR Conference on Research and Development in Information Retrieval
Abbreviated title: SIGIR '14
Country: Australia
Period: 6/07/14 – 11/07/14

Keywords

  • information retrieval
  • relevance judgements
  • image retrieval

Cite this

Halvey, M., & Villa, R. (2014). Evaluating the effort involved in relevance assessments for images. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR '14) (pp. 887-890). New York: ACM. https://doi.org/10.1145/2600428.2609466