A Study on Performance Volatility in Information Retrieval

Cited by: 0
Authors: Hosseini, Mehdi [1]
Affiliation: [1] UCL, Dept Comp Sci, London WC1E 6BT, England
DOI: Not available
CLC Classification Number: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
A common practice in the comparative evaluation of information retrieval (IR) systems is to create a test collection comprising a set of topics (queries), a document corpus, and relevance judgments, and to monitor the performance of retrieval systems over such a collection. A typical evaluation computes a performance metric, e.g., Average Precision (AP), for each topic and then uses the average, e.g., Mean Average Precision (MAP), to express overall system performance. However, averages do not capture all the important aspects of system performance and, used alone, may not thoroughly express system effectiveness. For example, the average can mask large variations in individual topic effectiveness. The author's hypothesis is that, in addition to average performance, attention needs to be paid to how a system's performance varies across topics. We refer to this performance variation as volatility. The main purpose of the thesis is to introduce the concept of performance volatility and apply it to information retrieval.

There are several ways in which volatility might be defined. One obvious definition is the standard deviation (SD) of the AP values from their MAP. An alternative definition might be to compute the expected performance using a subset of queries and then measure the deviation of held-out queries from this prediction. Another definition could be based on the interquartile range [1]. Our initial investigation has used SD as a measure of volatility, which has the benefit of being a well-understood and well-studied quantity. However, our preliminary experiments, which calculated a straightforward SD of per-topic performance scores, highlighted a problem: scores are typically bounded in [0, 1], and as a result we observed that systems with low MAP exhibited lower volatility. This bias can be eliminated by applying score standardization [3] or a logit transformation to the AP values, in which case the range of values becomes (-∞, +∞).

One application of volatility is in the evaluation of system effectiveness. Following standard practice in experimental analysis, it is beneficial to consider both the mean and the volatility of performance (e.g., AP) across topics. Of course, variance is routinely used within IR to assess the statistical significance of measurements. However, two systems can have statistically equivalent mean performance yet exhibit quite different variances, and in such a situation we may prefer the less or the more volatile system. For example, we can set a minimum threshold on average performance, say on MAP: if MAP falls below the threshold, we prefer the volatile system and hope to obtain satisfactory AP scores on at least some topics; if MAP exceeds the threshold, the reverse holds. Such a strategy is consistent with the TREC robust track [2], whose main goal was to improve the consistency of system evaluation by weighting the impact of well and poorly performing topics equally.
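To make the SD-based definition concrete, here is a minimal sketch, not taken from the thesis: the per-topic AP values are hypothetical, and the epsilon guard in the logit is an implementation choice rather than anything the abstract prescribes. The two systems share the same MAP but differ sharply in volatility, which is exactly the distinction the mean alone would hide.

```python
import math

def logit(p, eps=1e-6):
    """Map a score in (0, 1) to (-inf, +inf); eps guards the 0/1 endpoints."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def volatility(ap_scores, transform=False):
    """Sample SD of per-topic AP values around their mean (MAP).

    With transform=True the AP values are logit-transformed first,
    removing the bias that bounded [0, 1] scores impose on low-MAP systems.
    """
    scores = [logit(s) for s in ap_scores] if transform else list(ap_scores)
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)
    return math.sqrt(var)

# Hypothetical per-topic AP values: both systems have MAP = 0.31
system_a = [0.31, 0.29, 0.33, 0.30, 0.32]   # consistent
system_b = [0.05, 0.62, 0.10, 0.55, 0.23]   # volatile

for name, aps in (("A", system_a), ("B", system_b)):
    map_score = sum(aps) / len(aps)
    print(name, round(map_score, 3),
          round(volatility(aps), 3),
          round(volatility(aps, transform=True), 3))
```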
Another application of volatility may be in performance prediction, where volatility arises from several factors. Performance prediction involves a measurement step followed by a prediction step. During the measurement step, we are given a collection, a set of queries, and the corresponding results together with relevance judgments. During the prediction step, we can consider three different scenarios. The first predicts system performance on a different topic set (queries) but the same document collection as used during the measurement step. The second predicts performance on a different document collection but the same topic set. The third predicts system performance for both a different topic set and a different document collection. Volatility may be useful in judging the quality of these predictions.
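The held-out definition of volatility mentioned earlier maps naturally onto the first scenario. The sketch below is a rough, hypothetical illustration rather than the thesis method: a deliberately naive mean predictor stands in for whatever prediction model is actually used, and the RMS deviation of held-out topics from the predicted score is the quantity that volatility would help to judge.

```python
import math
import random

def predict_and_assess(ap_by_topic, train_frac=0.5, seed=0):
    """Scenario-one sketch: estimate expected AP from a measured topic
    subset, then check how far held-out topics deviate from it."""
    topics = list(ap_by_topic)
    random.Random(seed).shuffle(topics)
    cut = int(len(topics) * train_frac)
    measured, held_out = topics[:cut], topics[cut:]

    # Naive predictor: the mean AP over the measured topics.
    expected = sum(ap_by_topic[t] for t in measured) / len(measured)
    rms_dev = math.sqrt(
        sum((ap_by_topic[t] - expected) ** 2 for t in held_out) / len(held_out)
    )
    return expected, rms_dev

# Hypothetical per-topic AP scores keyed by TREC-style topic ids
ap = {f"t{i}": s for i, s in enumerate(
    [0.31, 0.29, 0.33, 0.30, 0.32, 0.05, 0.62, 0.10, 0.55, 0.23])}
expected, rms_dev = predict_and_assess(ap)
print(f"predicted AP: {expected:.3f}, held-out RMS deviation: {rms_dev:.3f}")
```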
Pages: 854-854 (1 page)