Introduction to the special issue on evaluating interactive information retrieval systems

P. Borlund, I. Ruthven

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching using IR systems have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study.
Language: English
Pages: 1-3
Number of pages: 2
Journal: Information Processing and Management
Volume: 44
Issue number: 1
Publication status: Published - Jan 2008

Keywords

  • information retrieval
  • algorithms
  • evaluation
  • computer systems

Cite this

@article{3be2aa7a7daa4d75938d7d9d974770f8,
title = "Introduction to the special issue on evaluating interactive information retrieval systems",
keywords = "information retrieval, algorithms, evaluation, computer systems",
author = "P. Borlund and I. Ruthven",
year = "2008",
month = jan,
language = "English",
journal = "Information Processing and Management",
volume = "44",
number = "1",
pages = "1--3",
issn = "0306-4573",
doi = "10.1016/j.ipm.2007.03.006",
url = "http://www.cis.strath.ac.uk/~ir/ipm/",
}

Introduction to the special issue on evaluating interactive information retrieval systems. / Borlund, P.; Ruthven, I.

In: Information Processing and Management, Vol. 44, No. 1, 01.2008, p. 1-3.
