Evaluating implicit feedback models using searcher simulations

R.W. White, I. Ruthven, J.M. Jose, C.J. van Rijsbergen

Research output: Contribution to journal › Article

64 Citations (Scopus)

Abstract

In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed other models under investigation.
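
The abstract's central technical idea, treating a simulated searcher's interactions as uncertain relevance evidence and re-weighting candidate query modification terms with Jeffrey's rule of conditioning, can be sketched in a short simulation loop. The sketch below is a minimal illustration under assumptions introduced here (the toy collection, the viewing policy, the fixed belief value, and all function names); it is not one of the article's six models or its evaluation methodology.

# Illustrative toy only: a simulated searcher "views" a few relevant documents per
# step, and candidate query expansion terms are re-scored with a Jeffrey's-
# conditioning-style mix of viewed vs. not-viewed evidence. All names and values
# here are assumptions for illustration, not the article's models.
from collections import Counter

PUNCT = ".,;:!?\"'()"

def tokenize(text):
    # Lowercase whitespace tokenizer; no stemming or stopword removal (assumption).
    return [w for w in (t.strip(PUNCT).lower() for t in text.split()) if w]

def jeffrey_mix(p_term_given_viewed, p_term_given_not_viewed, belief_viewed):
    # Jeffrey's rule of conditioning over the partition {viewed, not viewed}:
    # P'(term) = P(term | viewed) * P'(viewed) + P(term | not viewed) * P'(not viewed)
    return (p_term_given_viewed * belief_viewed
            + p_term_given_not_viewed * (1.0 - belief_viewed))

def simulate_session(query_terms, documents, relevant_ids,
                     viewed_per_step=2, steps=2, top_k=5, belief_viewed=0.8):
    # Simulated searcher: at each step a few more relevant documents count as
    # "viewed"; term scores are recomputed from the accumulated evidence.
    relevant_order = [d for d in documents if d in relevant_ids]
    scores = {}

    for step in range(steps):
        viewed_so_far = set(relevant_order[:(step + 1) * viewed_per_step])
        viewed_counts, unviewed_counts = Counter(), Counter()
        for doc_id, text in documents.items():
            target = viewed_counts if doc_id in viewed_so_far else unviewed_counts
            target.update(tokenize(text))

        total_v = sum(viewed_counts.values()) or 1
        total_u = sum(unviewed_counts.values()) or 1
        for term in set(viewed_counts) | set(unviewed_counts):
            scores[term] = jeffrey_mix(viewed_counts[term] / total_v,
                                       unviewed_counts[term] / total_u,
                                       belief_viewed)

    # Rank expansion candidates, excluding the original query terms.
    candidates = [(t, s) for t, s in scores.items() if t not in set(query_terms)]
    return sorted(candidates, key=lambda x: x[1], reverse=True)[:top_k]

if __name__ == "__main__":
    docs = {
        "d1": "Implicit feedback infers relevance from searcher interaction.",
        "d2": "Ranking query expansion terms from viewed document summaries.",
        "d3": "Weekend weather forecast and travel news.",
    }
    print(simulate_session(["implicit", "feedback"], docs, relevant_ids={"d1", "d2"}))

Under these assumptions, each step marks a few more relevant documents as viewed, term probabilities conditioned on viewed and unviewed text are mixed by the degree of belief in the viewed evidence, and the highest-scoring non-query terms are returned as expansion candidates.
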
Language: English
Pages: 325-361
Number of pages: 36
Journal: ACM Transactions on Information Systems
Volume: 23
Issue number: 3
DOIs: 10.1145/1080343.1080347
Publication status: Published - 2005

Fingerprint

  • Feedback
  • Simulation
  • Implicit feedback
  • Interaction
  • Query
  • Experiments
  • Relevance feedback

Keywords

  • searching
  • implicit feedback
  • databases
  • relevance feedback
  • algorithms
  • query modification

Cite this

White, R.W.; Ruthven, I.; Jose, J.M.; van Rijsbergen, C.J. / Evaluating implicit feedback models using searcher simulations. In: ACM Transactions on Information Systems. 2005; Vol. 23, No. 3. pp. 325-361.
@article{ec04761792904de69096ceb6b100fe7b,
title = "Evaluating implicit feedback models using searcher simulations",
abstract = "In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed other models under investigation.",
keywords = "searching, implicit feedback, databases, relevance feedback, algorithms, query modification",
author = "R.W. White and I. Ruthven and J.M. Jose and {van Rijsbergen}, C.J.",
year = "2005",
doi = "10.1145/1080343.1080347",
language = "English",
volume = "23",
pages = "325--361",
journal = "ACM Transactions on Information Systems",
issn = "1046-8188",
number = "3",

}

TY - JOUR

T1 - Evaluating implicit feedback models using searcher simulations

AU - White, R.W.

AU - Ruthven, I.

AU - Jose, J.M.

AU - van Rijsbergen, C.J.

PY - 2005

Y1 - 2005

N2 - In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed other models under investigation.

AB - In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed other models under investigation.

KW - searching

KW - implicit feedback

KW - databases

KW - relevance feedback

KW - algorithms

KW - query modification

UR - http://dx.doi.org/10.1145/1080343.1080347

UR - http://www.acm.org/pubs/tois/

UR - http://research.microsoft.com/~ryenw/papers/WhiteTOIS2005.pdf

U2 - 10.1145/1080343.1080347

DO - 10.1145/1080343.1080347

M3 - Article

VL - 23

SP - 325

EP - 361

JO - ACM Transactions on Information Systems

T2 - ACM Transactions on Information Systems

JF - ACM Transactions on Information Systems

SN - 1046-8188

IS - 3

ER -