An analysis of evaluation campaigns in ad-hoc medical information retrieval: CLEF eHealth 2013 and 2014
Cited by: 0
Authors:
Lorraine Goeuriot
Gareth J. F. Jones
Liadh Kelly
Johannes Leveling
Mihai Lupu
Joao Palotti
Guido Zuccon
Affiliations:
[1] Université Grenoble Alpes, LIG
[2] Dublin City University
[3] Maynooth University
[4] TU Wien
[5] Queensland University of Technology
Source:
Keywords:
eHealth;
Evaluation;
Benchmarking;
DOI:
Not available
Chinese Library Classification (CLC) number:
Subject classification number:
Abstract:
Since its inception in 2013, one of the key contributions of the CLEF eHealth evaluation campaign has been the organization of an ad-hoc information retrieval (IR) benchmarking task. This IR task evaluates systems intended to support laypeople searching for and understanding health information. Each year the task provides registered participants with standard IR test collections consisting of a document collection and a topic set. Participants then return the retrieval results obtained by their IR systems for each query, and these results are assessed using a pooling procedure. In this article we focus on the CLEF eHealth 2013 and 2014 retrieval tasks, whose topics were created based on patients’ information needs associated with their medical discharge summaries. We give an overview of the task, the datasets created, and the results obtained by participating teams over these two years. We then provide a detailed comparative analysis of the results and evaluate the datasets in light of them. This twofold study of the evaluation campaign teaches us about technical aspects of medical IR, such as the effectiveness of query expansion; about the quality and characteristics of the CLEF eHealth IR datasets, such as their reliability; and about how to run an IR evaluation campaign in the medical domain.
Pages: 507–540
Page count: 33