A Study on Performance Volatility in Information Retrieval

Cited by: 0
Author: Hosseini, Mehdi [1]
Affiliation: [1] UCL, Dept Comp Sci, London WC1E 6BT, England
Keywords: none listed
DOI: not available
Chinese Library Classification: TP (automation and computer technology)
Discipline code: 0812
Abstract
A common practice in the comparative evaluation of information retrieval (IR) systems is to create a test collection comprising a set of topics (queries), a document corpus, and relevance judgments, and to monitor the performance of retrieval systems over such a collection. A typical evaluation of a system involves computing a performance metric, e.g., Average Precision (AP), for each topic and then using the average performance metric, e.g., Mean Average Precision (MAP), to express the overall system performance. However, averages do not capture all the important aspects of system performance and, used alone, may not thoroughly express system effectiveness. For example, the average can mask large variations in individual topic effectiveness. The author's hypothesis is that, in addition to average performance, attention needs to be paid to how a system's performance varies across topics. We refer to this performance variation as volatility.

The main purpose of the thesis is to introduce the concept of performance volatility and apply it to information retrieval. There are several ways in which volatility might be defined. One obvious definition is the standard deviation (SD) of the AP values from their MAP. An alternative definition might be to compute the expected performance using a subset of queries, and then measure the deviation of held-out queries from this prediction. Another definition could be based on the interquartile range [1]. Our initial investigation has used SD as a measure of volatility, which has the benefit of being a well-understood and well-studied quantity. However, our preliminary experiments, which calculated a straightforward SD of per-topic performance scores, highlighted a problem. Typically, scores are bounded in [0, 1]; as a result, we observed that systems with low MAP exhibited lower volatility. This bias can be eliminated by applying a score standardization [3] or a logit transformation to the AP values, in which case the range of values becomes (-infinity, +infinity); a sketch of this computation is given below.

One application of volatility is in the evaluation of system effectiveness. Following standard practice in experimental analysis, it is beneficial to consider both the mean and the volatility of performance (e.g., AP) across topics. Of course, variance is routinely used within IR to assess the statistical significance of measurements. However, two systems can have statistically equivalent mean performance yet exhibit quite different variances, and in such a situation we may prefer the system with lower or higher volatility. For example, we can set a minimum acceptable level of average performance, say a MAP threshold. If MAP falls below this threshold, we prefer a volatile system, hoping to obtain satisfactory AP scores on at least some topics; if MAP exceeds the threshold, we prefer a stable system. Such a strategy is consistent with the TREC robust track [2], whose main goal was to improve the consistency of system evaluation by weighting the impact of well and poorly performing topics equally.
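To make the SD-based definition concrete, the sketch below computes MAP and a logit-transformed volatility from per-topic AP scores. It is a minimal illustration, not the thesis's implementation: the function names, the eps clipping, and the use of the sample (n-1) standard deviation are assumptions, since the abstract leaves these details open and cites score standardization [3] as an alternative to the logit.

```python
import math

def logit(p, eps=1e-6):
    """Map a score in [0, 1] onto (-infinity, +infinity).
    eps clips exact 0/1 values so the log is defined."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def map_and_volatility(ap_scores, transform=True):
    """Return (MAP, volatility) for a list of per-topic AP scores.
    Volatility is the sample standard deviation of the per-topic values,
    computed on logit-transformed scores when transform=True to remove the
    bias that the bounded [0, 1] range imposes on low-MAP systems."""
    mean_ap = sum(ap_scores) / len(ap_scores)
    values = [logit(a) for a in ap_scores] if transform else list(ap_scores)
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return mean_ap, math.sqrt(var)

# Two systems with nearly identical MAP but very different consistency:
consistent = [0.28, 0.31, 0.25, 0.30, 0.27]
volatile = [0.05, 0.62, 0.08, 0.55, 0.12]
print(map_and_volatility(consistent))  # low volatility
print(map_and_volatility(volatile))    # much higher volatility
```

The usage lines show the point made above: averaging alone cannot distinguish these two systems, while volatility separates them clearly.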
Another application of volatility may be in performance prediction, where volatility arises from several factors. Performance prediction involves a measurement step followed by a prediction step. During the measurement step, we are given a collection, a set of queries, and the corresponding results together with relevance judgments. During the prediction step, we can consider three scenarios. The first predicts system performance on a different topic set (queries) but the same document collection as used during the measurement step. The second predicts performance on a different document collection but the same topic set. The third predicts system performance for both a different topic set and a different document collection. Volatility may be useful in judging the quality of these predictions.
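One hedged way to operationalize that last point, for the first scenario (new topics, same collection), is sketched below: treat the measured per-topic AP values as a sample, and let the measured volatility set the width of an interval around the predicted MAP on an unseen topic set. The normality assumption and the z = 1.96 multiplier are illustrative assumptions, not part of the source.

```python
import math

def predict_map_interval(measured_ap, n_new_topics, z=1.96):
    """Predict MAP on an unseen topic set drawn over the same collection.
    The point estimate is the measured MAP; the interval half-width scales
    with volatility (the SD of the measured per-topic AP values) via the
    standard error of a mean over n_new_topics topics."""
    k = len(measured_ap)
    mean = sum(measured_ap) / k
    sd = math.sqrt(sum((a - mean) ** 2 for a in measured_ap) / (k - 1))
    half_width = z * sd / math.sqrt(n_new_topics)
    return mean, (max(0.0, mean - half_width), min(1.0, mean + half_width))

# A more volatile system yields a wider interval, i.e. a prediction that
# deserves less trust when comparing systems on future topic sets.
map_est, (low, high) = predict_map_interval(
    [0.31, 0.08, 0.52, 0.27, 0.44], n_new_topics=50)
```

Under this reading, two systems with the same predicted MAP are distinguished by interval width: the less volatile system gives the more reliable prediction.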
Pages: 854-854 (1 page)