CLEF 2017 Technologically Assisted Reviews in Empirical Medicine Overview

Evangelos Kanoulas, Dan Li, Leif Azzopardi, Rene Spijker

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

Systematic reviews are a widely used method to provide an overview of the current scientific consensus by bringing together multiple studies in a reliable, transparent way. The large and growing number of published studies, and their increasing rate of publication, makes the task of identifying all relevant studies in an unbiased way both complex and time consuming, to the extent that it jeopardizes the validity of review findings and the ability to inform policy and practice in a timely manner. The CLEF 2017 e-Health Lab Task 2 focuses on the efficient and effective ranking of studies during the abstract and title screening phase of conducting Diagnostic Test Accuracy systematic reviews. We constructed a benchmark collection of fifty such reviews and the corresponding relevant and irrelevant articles found by the original Boolean query. Fourteen teams participated in the task, submitting 68 automatic and semi-automatic runs that applied information retrieval and machine learning algorithms over a variety of text representations, in both batch and iterative fashion. This paper reports the methodology used to construct the benchmark collection and the results of the evaluation.
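The ranking task the abstract describes can be illustrated with a minimal sketch. This is not the lab's official setup or any participant's system, just one of the simplest batch approaches consistent with the task description: score each candidate article against the review topic by TF-IDF cosine similarity, so likely-relevant abstracts surface first for the screener.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, split on whitespace, and strip simple punctuation."""
    return [w for w in (t.strip(".,;:()") for t in text.lower().split()) if w]

def rank_by_tfidf(topic, docs):
    """Return document indices ordered by TF-IDF cosine similarity to the topic."""
    corpus = [tokenize(topic)] + [tokenize(d) for d in docs]
    n = len(corpus)
    # document frequency and inverse document frequency over the small corpus
    df = Counter(term for toks in corpus for term in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf[t] for t in tf}

    def cosine(a, b):
        num = sum(a[t] * b[t] for t in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    qv = vec(corpus[0])
    scores = [(cosine(qv, vec(toks)), i) for i, toks in enumerate(corpus[1:])]
    return [i for _, i in sorted(scores, reverse=True)]
```

In an actual screening setting the "topic" would be the review's title and protocol text, and the candidates would be the titles and abstracts returned by the Boolean query; many submitted runs used far richer representations and learned models than this.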

Language: English
Pages: 1-29
Number of pages: 29
Journal: CEUR Workshop Proceedings
Volume: 1866
Publication status: Published - 11 Sep 2017


Keywords

  • Active learning
  • Evaluation
  • Information retrieval
  • Systematic reviews
  • TAR
  • Text classification
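The "Active learning" and "TAR" keywords refer to the iterative runs mentioned in the abstract: the system re-ranks the unscreened pool after each batch of reviewer judgments. The loop below is a hypothetical sketch of that protocol (the `judge` callback and term-overlap scoring are illustrative assumptions, not any team's actual method):

```python
def active_screening(pool, judge, batch_size=2):
    """Iteratively present top-scored documents to a reviewer (judge),
    re-scoring the remaining pool by term overlap with documents
    already judged relevant. Returns relevant indices in screening order."""
    relevant_terms = set()
    remaining = dict(enumerate(pool))
    found = []
    while remaining:
        # score each unscreened doc by overlap with terms from relevant docs
        def score(i):
            return len(relevant_terms & set(remaining[i].lower().split()))
        batch = sorted(remaining, key=score, reverse=True)[:batch_size]
        for i in batch:
            if judge(remaining[i]):  # reviewer's binary relevance judgment
                found.append(i)
                relevant_terms |= set(remaining[i].lower().split())
            del remaining[i]
    return found
```

The point of such a loop is that relevant studies are found earlier in the screening order, which is what the task's evaluation measures reward.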

Cite this

Kanoulas, Evangelos; Li, Dan; Azzopardi, Leif; Spijker, Rene. CLEF 2017 technologically assisted reviews in empirical medicine overview. In: CEUR Workshop Proceedings. 2017; Vol. 1866, pp. 1-29.

@article{fccd2bec4a0b4607b7267d260cc83dd9,
title = "CLEF 2017 technologically assisted reviews in empirical medicine overview",
keywords = "Active learning, Evaluation, Information retrieval, Systematic reviews, TAR, Text classification",
author = "Evangelos Kanoulas and Dan Li and Leif Azzopardi and Rene Spijker",
year = "2017",
month = "9",
day = "11",
language = "English",
volume = "1866",
pages = "1--29",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
}

Links

  • http://ceur-ws.org/Vol-1866/
  • http://www.scopus.com/inward/record.url?scp=85034732447&partnerID=8YFLogxK