User models, metrics and measures of search: a tutorial on the C/W/L evaluation framework

Leif Azzopardi, Alistair Moffat, Paul Thomas, Guido Zuccon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Evaluation is central to Information Retrieval, and is how we compare the quality of systems. One important principle of evaluation is that the measured score should reflect the user's experience with the system; hence, there should be a direct connection between how users interact with the system and the characteristics of the metric. In this tutorial we introduce the C/W/L approach to user modeling and show how different user models lead to different metrics. We then describe the recent innovations and approaches to evaluation that it has facilitated. The tutorial is presented as a mix of online synchronous lectures, pre-recorded in-depth videos, and hands-on activities using the C/W/L toolkit for participants' own evaluation tasks. A follow-up consultation session is also provided, to allow extended questions and individual discussion with the four presenters.
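To give a flavour of the idea the abstract describes: in the C/W/L framework a user model is expressed via a continuation probability C(i), the chance the user proceeds from rank i to rank i+1, which induces per-rank weights W(i) and hence a metric as an expected rate of gain. The sketch below is a minimal illustration of that mapping only; the function names are hypothetical and it does not reflect the actual C/W/L toolkit's API. It shows two user models (a constant-patience model, yielding rank-biased precision, and a fixed-depth model, yielding precision@k) scoring the same ranking.

```python
def weights_from_continuation(C, depth):
    """Derive C/W/L weights W(i) from a continuation function C(i).

    W(i) is proportional to the probability of reaching rank i,
    i.e. the product of the continuation probabilities above it.
    Normalising over a finite depth approximates the infinite-sum
    form used for metrics such as RBP.
    """
    reach = [1.0]  # the user always examines rank 1
    for i in range(1, depth):
        reach.append(reach[-1] * C(i))
    total = sum(reach)
    return [r / total for r in reach]


def expected_rate_of_gain(gains, C):
    """Score a ranking as the W(i)-weighted sum of per-rank gains."""
    W = weights_from_continuation(C, len(gains))
    return sum(w * g for w, g in zip(W, gains))


gains = [1, 0, 1, 1, 0]  # per-rank relevance gains for one ranking

# Constant continuation probability p = 0.8: the RBP user model.
rbp = expected_rate_of_gain(gains, lambda i: 0.8)

# Always continue to depth 3, then stop: the precision@3 user model.
p_at_3 = expected_rate_of_gain(gains, lambda i: 1.0 if i < 3 else 0.0)

print(rbp, p_at_3)  # p_at_3 == 2/3, as expected for this ranking
```

The point of the framework, as the tutorial develops it, is exactly this separation: the ranking and its gains stay fixed, and swapping in a different user model (a different C(i)) yields a different metric.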

Original language: English
Title of host publication: CHIIR 2021 - Proceedings of the 2021 Conference on Human Information Interaction and Retrieval
Place of publication: New York, NY
Pages: 347–348
Number of pages: 2
ISBN (electronic): 9781450380553
Publication status: Published - 14 Mar 2021

Publication series

Name: CHIIR 2021 - Proceedings of the 2021 Conference on Human Information Interaction and Retrieval

Keywords

  • information retrieval
  • information search
  • user models
  • information interaction
  • C/W/L framework
