Pre-training in Medical Data: A Survey

Cited by: 7
Authors
Qiu, Yixuan [1]
Lin, Feng [1]
Chen, Weitong [2]
Xu, Miao [1]
Affiliations
[1] Univ Queensland, Brisbane 4072, Australia
[2] Univ Adelaide, Adelaide 5005, Australia
Keywords
Medical data; pre-training; transfer learning; self-supervised learning; medical image data; electrocardiograms (ECG) data; CONVOLUTIONAL NEURAL-NETWORKS; BRAIN-TUMOR CLASSIFICATION; OPEN ACCESS DATABASE; RESEARCH RESOURCE; DEEP; RECOGNITION; ALGORITHMS; SIGNALS; CANCER; HEALTH;
DOI
10.1007/s11633-022-1382-8
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Medical data refers to health-related information collected during routine patient care or as part of a clinical trial program. It spans many categories, such as clinical imaging data, bio-signal data, electronic health records (EHR), and multi-modality medical data. With the development of deep neural networks over the last decade, the pre-training paradigm has become dominant because it significantly improves the performance of machine learning methods in data-limited scenarios. In recent years, studies of pre-training in the medical domain have made significant progress. To summarize these technological advances, this work provides a comprehensive survey of recent pre-training methods for several major types of medical data. We review a large number of related publications and existing benchmarks in the medical domain. In particular, the survey briefly describes how pre-training methods are applied to or developed for medical data. From a data-driven perspective, we examine the extensive use of pre-training across many medical scenarios. Finally, based on this summary of recent pre-training studies, we identify several open challenges in the field to provide insights for future research.
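
To make the pre-train-then-fine-tune paradigm described above concrete, the following minimal sketch (not taken from the survey) fine-tunes an ImageNet-pretrained ResNet-18 with PyTorch/torchvision on a small, hypothetical medical image classification dataset; the dataset path, class count, and hyperparameters are illustrative assumptions.

# Minimal sketch (not from the survey): adapt an ImageNet-pretrained backbone
# to a small medical image dataset, the typical transfer-learning setup the
# abstract describes. Paths, class count, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 2                 # e.g., normal vs. abnormal (assumption)
DATA_DIR = "chest_xray/train"   # hypothetical ImageFolder-style dataset

# Standard ImageNet preprocessing so the pretrained weights remain meaningful.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Load the pretrained backbone and replace its classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

criterion = nn.CrossEntropyLoss()
# A smaller learning rate than training from scratch: we only adapt the
# pretrained features to the new, data-limited task.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few epochs are often enough when fine-tuning
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

In practice the backbone can also be frozen so that only the new head is trained, or the initial weights can come from self-supervised pre-training on unlabeled medical images rather than from ImageNet.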
Pages: 147-179
Number of pages: 33
Related Papers
50 records in total
  • [1] Rethinking pre-training on medical imaging
    Wen, Yang
    Chen, Leiting
    Deng, Yu
    Zhou, Chuan
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 78
  • [2] Event Camera Data Pre-training
    Yang, Yan
    Pan, Liyuan
    Liu, Liu
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 10665-10675
  • [3] Survey on Vision-language Pre-training
    Yin, Jiong
    Zhang, Zhe-Dong
    Gao, Yu-Han
    Yang, Zhi-Wen
    Li, Liang
    Xiao, Mang
    Sun, Yao-Qi
    Yan, Cheng-Gang
    [J]. Ruan Jian Xue Bao/Journal of Software, 2023, 34 (05): 2000-2023
  • [4] VLP: A Survey on Vision-language Pre-training
    Chen, Fei-Long
    Zhang, Du-Zhen
    Han, Ming-Lun
    Chen, Xiu-Yi
    Shi, Jing
    Xu, Shuang
    Xu, Bo
    [J]. MACHINE INTELLIGENCE RESEARCH, 2023, 20 (01) : 38 - 56
  • [5] Application Specificity of Data for Pre-Training in Computer Vision
    Peters, Gabriel G.
    Couwenhoven, Scott D.
    Walvoord, Derek J.
    Salvaggio, Carl
    [J]. DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VIII, 2024, 13058
  • [6] ELLE: Efficient Lifelong Pre-training for Emerging Data
    Qin, Yujia
    Zhang, Jiajie
    Lin, Yankai
    Liu, Zhiyuan
    Li, Peng
    Sun, Maosong
    Zhou, Jie
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022: 2789-2810
  • [7] Survey: Transformer based video-language pre-training
    Ruan, Ludan
    Jin, Qin
    [J]. AI OPEN, 2022, 3 : 1 - 13
  • [8] Pre-training on Grayscale ImageNet Improves Medical Image Classification
    Xie, Yiting
    Richmond, David
    [J]. COMPUTER VISION - ECCV 2018 WORKSHOPS, PT VI, 2019, 11134 : 476 - 484