CLEF 2017 dynamic search lab overview and evaluation

Evangelos Kanoulas, Leif Azzopardi

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper we provide an overview of the first edition of the CLEF Dynamic Search Lab. The CLEF Dynamic Search Lab ran in the form of a workshop with the goal of approaching one key question: how can we evaluate dynamic search algorithms? Unlike static search algorithms, which essentially consider user requests independently and do not adapt the ranking with respect to the user's sequence of interactions, dynamic search algorithms try to infer the user's intentions from their interactions and then adapt the ranking accordingly. Personalized session search, contextual search, and dialog systems often adopt such algorithms. This lab provides an opportunity for researchers to discuss the challenges faced when trying to measure and evaluate the performance of dynamic search algorithms, given the context of available corpora, simulation methods, and current evaluation metrics. To seed the discussion, a pilot task was run with the goal of producing search agents that could simulate the process of a user interacting with a search system over the course of a search session. Herein, we describe the overall objectives of the CLEF 2017 Dynamic Search Lab, the resources created for the pilot task, the evaluation methodology adopted, and some preliminary evaluation results of the pilot task.

Original language: English
Pages (from-to): 1-9
Number of pages: 9
Journal: CEUR Workshop Proceedings
Volume: 1866
Publication status: Published - 11 Sept 2017

Keywords

  • dynamic search algorithms
  • HCI
  • information retrieval
  • search
