Unsupervised Detection and Correction of Model Calibration Shift at Test-Time

Cited by: 0
Authors
Shashikumar, Supreeth P. [1 ]
Amrollahi, Fatemeh [1 ]
Nemati, Shamim [1 ]
Institutions
[1] Univ Calif San Diego, Div Biomed Informat, La Jolla, CA 92093 USA
Funding
US National Institutes of Health;
Keywords
INTERNATIONAL CONSENSUS DEFINITIONS; VALIDATION; SEPSIS;
DOI
10.1109/EMBC40787.2023.10341086
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The wide adoption of predictive models into clinical practice requires generalizability across hospitals and consistent performance over time. Model calibration shift, caused by factors such as changes in prevalence rates or data distribution shift, can degrade the generalizability of such models. In this work, we propose a model calibration detection and correction (CaDC) method specifically designed to use only unlabeled data at a target hospital. The proposed method is flexible and can be used alongside any deep learning-based clinical predictive model. As a case study, we focus on detecting and correcting model calibration shift in the context of early prediction of sepsis. Three patient cohorts comprising 545,089 adult patients admitted to the emergency departments of three geographically diverse healthcare systems in the United States were used to train and externally validate the proposed method. We show that the CaDC model can help the sepsis prediction model achieve a predefined positive predictive value (PPV). For instance, when trained to achieve a PPV of 20%, the sepsis prediction model with vs. without the calibration shift estimation model achieved a PPV of 18.0% vs. 12.9% and 23.1% vs. 13.4% at the two external validation cohorts, respectively. As such, the proposed CaDC method has potential applications in maintaining performance claims of predictive models deployed across hospital systems.
Pages: 4
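The abstract does not give implementation details of the correction step. As a minimal illustrative sketch only (not the authors' method), one fully unsupervised correction consistent with the setup described above is quantile mapping: align the target hospital's unlabeled risk-score distribution with the source hospital's, so a decision threshold chosen at the source site (e.g., for 20% PPV) transfers. All names below (`fit_quantile_map`, the simulated scores) are hypothetical.

```python
import numpy as np

def fit_quantile_map(source_scores, target_scores, n_quantiles=101):
    """Return a function mapping target-site risk scores onto the
    source-site score distribution via piecewise-linear CDF matching.
    Uses only unlabeled scores from both sites (no target labels)."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(source_scores, qs)  # source quantile values
    tgt_q = np.quantile(target_scores, qs)  # target quantile values

    def correct(scores):
        # A score at target quantile q is mapped to the source value
        # at the same quantile q.
        return np.interp(scores, tgt_q, src_q)

    return correct

# Simulated example: target scores are shifted/rescaled vs. the source,
# as might happen under a change in case mix or data distribution.
rng = np.random.default_rng(0)
source = rng.beta(2, 8, size=5000)           # source-site risk scores
target = np.clip(source * 0.5 + 0.2, 0, 1)   # shifted target-site scores
correct = fit_quantile_map(source, target)
corrected = correct(target)                  # ~ source distribution again
```

After correction, the target scores occupy the source score range, so a fixed operating threshold retains its intended meaning; this is one plausible mechanism, not the paper's actual algorithm.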