Application of Threshold-bias Independent Analysis to Eye-tracking and FROC Data

Cited by: 3
Authors
Chakraborty, Dev P. [1 ]
Yoon, Hong-Jun
Mello-Thoms, Claudia [1 ,2 ]
Affiliations
[1] Univ Pittsburgh, Dept Radiol, Sch Med, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Dept Biomed Informat, Sch Med, Pittsburgh, PA 15213 USA
Keywords
Visual search; observer performance; eye-tracking; figures-of-merit; agreement; threshold-bias; OBSERVER PERFORMANCE; VISUAL-SEARCH; DECISION-MAKING; ROC CURVES; MODEL; VARIABILITY; METHODOLOGY; MAMMOGRAMS; PARADIGM; POSITION;
DOI
10.1016/j.acra.2012.09.002
Chinese Library Classification
R8 (Special medicine); R445 (Diagnostic imaging);
Subject classification codes
1002; 100207; 1009;
Abstract
Rationale and Objectives: Studies of medical image interpretation have focused on either assessing radiologists' performance using, for example, the receiver operating characteristic (ROC) paradigm, or assessing the interpretive process by analyzing their eye-tracking (ET) data. Analysis of ET data has not benefited from threshold-bias independent figures of merit (FOMs) analogous to the area under the ROC curve. The aim was to demonstrate the feasibility of such FOMs and to measure the agreement between FOMs derived from free-response ROC (FROC) and ET data. Methods: Eight expert breast radiologists interpreted a case set of 120 two-view mammograms while eye-position data and FROC data were continuously collected during the interpretation interval. Regions that attracted prolonged (>800 ms) visual attention were considered to be virtual marks, and ratings based on the dwell and approach-rate (inverse of time-to-hit) were assigned to them. The virtual ratings were used to define threshold-bias independent FOMs in a manner analogous to the area under the trapezoidal alternative FROC (AFROC) curve (0 = worst, 1 = best). Agreement at the case level (0.5 = chance, 1 = perfect) was measured using the jackknife, and 95% confidence intervals (CI) for the FOMs and agreement were estimated using the bootstrap. Results: The AFROC mark-ratings FOM was largest at 0.734 (CI 0.65-0.81), followed by the dwell FOM at 0.460 (0.34-0.59) and then by the approach-rate FOM at 0.336 (0.25-0.46). The differences between the FROC mark-ratings FOM and the perceptual FOMs were significant (P<.05). All pairwise agreements were significantly better than chance: ratings vs. dwell 0.707 (0.63-0.88), dwell vs. approach-rate 0.703 (0.60-0.79), and ratings vs. approach-rate 0.606 (0.53-0.68). The ratings vs. approach-rate agreement was significantly smaller than the dwell vs. approach-rate agreement (P=.008).
Conclusions: Leveraging current methods developed for analyzing observer performance data could complement current ways of analyzing ET data and lead to new insights.
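The AFROC-style figure of merit described in the abstract is a Wilcoxon-type statistic: the probability that a lesion-localization rating exceeds the highest rating assigned on a normal case, with ties counted as 0.5. A minimal sketch follows; the function names are hypothetical, and the independent resampling of lesion and normal-case ratings in the bootstrap is a simplification of the authors' case-level resampling, not their implementation.

```python
import numpy as np

def afroc_fom(lesion_ratings, normal_case_max_ratings):
    """Trapezoidal AFROC-style figure of merit: P(lesion-localization
    rating > highest rating on a normal case), ties scored as 0.5."""
    ll = np.asarray(lesion_ratings, dtype=float)
    fp = np.asarray(normal_case_max_ratings, dtype=float)
    wins = (ll[:, None] > fp[None, :]).sum()
    ties = (ll[:, None] == fp[None, :]).sum()
    return (wins + 0.5 * ties) / (ll.size * fp.size)

def bootstrap_ci(lesion_ratings, normal_case_max_ratings,
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the FOM (simplified: resamples the
    two rating pools with replacement rather than whole cases)."""
    rng = np.random.default_rng(seed)
    ll = np.asarray(lesion_ratings, dtype=float)
    fp = np.asarray(normal_case_max_ratings, dtype=float)
    foms = [afroc_fom(rng.choice(ll, ll.size), rng.choice(fp, fp.size))
            for _ in range(n_boot)]
    return np.quantile(foms, [alpha / 2, 1 - alpha / 2])
```

The same `afroc_fom` can be applied to FROC mark ratings, dwell times, or approach rates in turn, which is how the three FOMs reported in the abstract become directly comparable on the 0-1 scale.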
Pages: 1474-1483
Page count: 10