A Study on Performance Volatility in Information Retrieval

Cited by: 0
Authors
Hosseini, Mehdi [1]
Affiliations
[1] UCL, Dept Comp Sci, London WC1E 6BT, England
Keywords
DOI
Not available
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812 ;
Abstract
A common practice in comparative evaluation of information retrieval (IR) systems is to create a test collection comprising a set of topics (queries), a document corpus, and relevance judgments, and to monitor the performance of retrieval systems over such a collection. A typical evaluation of a system involves computing a performance metric, e.g., Average Precision (AP), for each topic and then using the average performance metric, e.g., Mean Average Precision (MAP), to express the overall system performance. However, averages do not capture all the important aspects of system performance and, used alone, may not thoroughly express system effectiveness. For example, the average can mask large variations in individual topic effectiveness. The author's hypothesis is that, in addition to average performance, attention needs to be paid to how a system's performance varies across topics. We refer to this performance variation as volatility. The main purpose of the thesis is to introduce the concept of performance volatility and apply it to information retrieval.

There are several ways in which volatility might be defined. One obvious definition is to use the standard deviation (SD) of the AP values from their MAP. An alternative definition might be to compute the expected performance using a subset of queries, and then measure the deviation of held-out queries from this prediction. Another definition could be based on the interquartile range [1]. Our initial investigation has used SD as a measure of volatility. Using SD to measure volatility has the benefit that it is a well-understood and well-studied quantity. However, our preliminary experiments, which calculated a straightforward SD of per-topic performance scores, highlighted a problem. Typically, scores are bounded between 0 and 1. As a result, we observed that systems with low MAP exhibited lower volatility. This bias can be eliminated by applying a score standardization [3] or a logit transformation to the AP values, in which case the range of values becomes (-infinity, +infinity).

One application of volatility is in the evaluation of system effectiveness. Following standard practice in experimental analysis, it is beneficial to consider both the mean and the volatility of performance (e.g., AP) across topics. Of course, variance is routinely used within IR to assess the statistical significance of measurements. However, two systems can have statistically equivalent mean performance yet exhibit quite different variances. In such a situation, we may prefer the system with lower or higher volatility. For example, we can set a minimum threshold for average performance, say MAP. If MAP scores fall below this threshold, we prefer volatile systems, in the hope of obtaining satisfactory AP scores for at least some of the topics; if MAP scores exceed the threshold, the opposite preference holds. Such a strategy is consistent with the TREC Robust track [2], where the main goal was to improve the consistency of system evaluation by weighting the impact of well-performing and poorly performing topics equally.

Another application of volatility may be in performance prediction, where volatility arises from several factors. Performance prediction involves a measurement step followed by a prediction step. During the measurement step, we are given a collection, a set of queries, and corresponding results together with relevance judgments. During the prediction step, we can consider three different scenarios.
The first scenario predicts system performance on a different topic set (queries) but the same document collection as used during the measurement step. The second scenario predicts performance on a different document collection but the same topic set. The third scenario predicts system performance for both a different topic set and a different document collection. Volatility may be useful in judging the quality of these predictions.
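The SD-based definition of volatility and the logit transformation mentioned above can be illustrated with a short sketch. The following Python snippet is a minimal illustration rather than code from the thesis: the per-topic AP scores and the eps guard are hypothetical, and the logit transform simply maps the bounded AP values onto (-infinity, +infinity) before the standard deviation is taken.

import math

def logit(p, eps=1e-6):
    """Map a score in (0, 1) onto the real line; eps guards against exact 0 or 1."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Hypothetical per-topic AP scores for a single system.
ap_scores = [0.12, 0.45, 0.08, 0.67, 0.30, 0.22]

map_score = mean(ap_scores)                             # Mean Average Precision
volatility_raw = std(ap_scores)                         # SD of the raw, bounded AP values
volatility_logit = std([logit(p) for p in ap_scores])   # SD after the logit transform

print(f"MAP = {map_score:.3f}")
print(f"Volatility (raw SD)   = {volatility_raw:.3f}")
print(f"Volatility (logit SD) = {volatility_logit:.3f}")

Comparing the raw and logit-transformed SD values shows how the transform removes the tendency of low-MAP systems to look artificially stable simply because their scores are squeezed near the lower bound.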
Pages: 854 - 854
Number of pages: 1
Related papers
50 records in total
  • [31] AN INFORMATION RETRIEVAL SYSTEM IN STUDY OF REPTILES
    GILBOA, I
    DOWLING, HG
    TOXICON, 1970, 8 (02) : 133 - &
  • [32] An Empirical Study of SLDA for Information Retrieval
    Ma, Dashun
    Rao, Lan
    Wang, Ting
    INFORMATION RETRIEVAL TECHNOLOGY, 2011, 7097 : 84 - +
  • [33] Information retrieval approaches: A comparative study
    Moutaoukkil, Assmaa
    Idarrou, Ali
    Belahyane, Imane
    INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2022, 13 (10) : 961 - 970
  • [34] Study of probability kinematics in information retrieval
    Univ of Glasgow, Glasgow, United Kingdom
    ACM Trans Inf Syst, 1998, 16 (03): 225 - 255
  • [35] BUDGETARY PARTICIPATION AND MANAGERIAL PERFORMANCE - THE IMPACT OF INFORMATION AND ENVIRONMENTAL VOLATILITY
    KREN, L
    ACCOUNTING REVIEW, 1992, 67 (03): : 511 - 526
  • [36] A Study of Information Retrieval Based on Ontology
    代金晶
    校园英语, 2017, (21) : 13 - 13
  • [37] A study of probability kinematics in information retrieval
    Crestani, F
    Van Rijsbergen, CJ
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 1998, 16 (03) : 225 - 255
  • [38] A study on the estimation of performance of the concept-based information retrieval model for searching the Web
    Noh, YH
    JOURNAL OF INFORMATION SCIENCE, 2002, 28 (05) : 407 - 415
  • [39] Enhancing information retrieval performance by using social analysis
    Khalifi, Hamid
    Dahir, Sarah
    El Qadi, Abderrahim
    Ghanou, Youssef
    SOCIAL NETWORK ANALYSIS AND MINING, 2020, 10 (01)
  • [40] Information Retrieval Performance on Story Recall in Normal Aging
    Park, Yu-Min
    Cho, Yoo-Jung
    Kim, Nayeon
    Lee, Jiho
    Park, Ki-Su
    Yoon, Janghyeok
    Ha, Ji-Wan
    COMMUNICATION SCIENCES AND DISORDERS-CSD, 2024, 29 (04): : 859 - 873