Cross-View Label Transfer in Knee MR Segmentation Using Iterative Context Learning

Citations: 1
Authors
Li, Tong [1 ,3 ]
Xuan, Kai [2 ]
Xue, Zhong [3 ]
Chen, Lei [3 ]
Zhang, Lichi [2 ]
Qian, Dahong [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Biomed Engn, Shanghai, Peoples R China
[2] Shanghai Jiao Tong Univ, Inst Med Imaging Technol, Sch Biomed Engn, Shanghai, Peoples R China
[3] Shanghai United Imaging Intelligence Co Ltd, Shanghai, Peoples R China
[4] Shanghai Jiao Tong Univ, Inst Med Robot, Shanghai, Peoples R China
Keywords
Knee MR images; Multi-view segmentation; Label transfer; Iterative context learning;
DOI
10.1007/978-3-030-60548-3_10
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
MR images of the knee joint are usually acquired in the axial, coronal, and sagittal views with large slice spacing for clinical studies. Current methods either segment the images of each view separately or apply super-resolution fusion before 3D segmentation; transferring knee-image segmentations between views remains an open problem. Moreover, most manual labeling effort is devoted to the sagittal view, and in practice it is difficult to collect label maps for the coronal and axial views, which are also invaluable for observing knee injuries. In this paper, we propose a novel algorithm to transfer sagittal-view annotations to the other views. First, we build a supervised low-resolution segmentation (LR-Seg) module trained on down-sampled sagittal-view slices to obtain label maps in the target view. A context transfer module is then proposed to refine these segmentations using target-view context. By training the two modules iteratively, the context from the result of one module guides the training of the other. Experimental results show that our algorithm greatly reduces the manual labeling burden on clinicians while achieving comparable segmentation results on the axial and coronal views.
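To make the iterative scheme concrete, below is a minimal sketch of the alternation between the two modules described in the abstract. This is not the authors' published code: the class names (LRSeg, ContextRefiner), the tiny convolutional bodies, the losses, and the pseudo-label exchange are all illustrative assumptions. Only the overall alternation follows the abstract: a supervised low-resolution segmenter produces coarse target-view label maps, a context transfer module refines them using target-view context, and the two are trained in turns so each module's result guides the other.

```python
# Hedged sketch of the iterative two-module scheme; NOT the authors'
# implementation. All names, shapes, and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LRSeg(nn.Module):
    """Supervised low-resolution segmentation (LR-Seg): trained on
    down-sampled sagittal slices, then applied to target-view slices."""

    def __init__(self, in_ch: int = 1, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ContextRefiner(nn.Module):
    """Context transfer module: refines a coarse label map using
    target-view image context (image and coarse logits concatenated)."""

    def __init__(self, in_ch: int = 1, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + n_classes, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 3, padding=1),
        )

    def forward(self, img: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([img, coarse], dim=1))


def train_iteratively(sag_imgs, sag_labels, tgt_imgs, n_rounds=3, steps=50):
    lr_seg, refiner = LRSeg(), ContextRefiner()
    opt_a = torch.optim.Adam(lr_seg.parameters(), lr=1e-3)
    opt_b = torch.optim.Adam(refiner.parameters(), lr=1e-3)
    for _ in range(n_rounds):
        # 1) Supervised LR-Seg training on down-sampled sagittal slices.
        for _ in range(steps):
            loss = F.cross_entropy(lr_seg(sag_imgs), sag_labels)
            opt_a.zero_grad(); loss.backward(); opt_a.step()
        # 2) Coarse target-view predictions become pseudo-labels that guide
        #    the refiner. (Placeholder loss: the real method would exploit
        #    richer target-view context than simply copying the coarse map.)
        with torch.no_grad():
            coarse = lr_seg(tgt_imgs)
            pseudo = coarse.argmax(dim=1)
        for _ in range(steps):
            loss = F.cross_entropy(refiner(tgt_imgs, coarse), pseudo)
            opt_b.zero_grad(); loss.backward(); opt_b.step()
        # 3) The refined maps would in turn supervise the next LR-Seg round
        #    (omitted here), so each module's output guides the other.
    return lr_seg, refiner


if __name__ == "__main__":
    # Toy tensors standing in for real down-sampled MR slices and labels.
    imgs = torch.randn(4, 1, 64, 64)
    labels = torch.randint(0, 5, (4, 64, 64))
    train_iteratively(imgs, labels, torch.randn(4, 1, 64, 64))
```

Concatenating the coarse logits with the target-view image is one common way to inject segmentation context into a refinement network; the paper's actual context representation and loss functions may differ.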
Pages: 96 - 105
Number of pages: 10
Related Papers
50 records in total
  • [21] CrossCBR: Cross-view Contrastive Learning for Bundle Recommendation
    Ma, Yunshan
    He, Yingzhi
    Zhang, An
    Wang, Xiang
    Chua, Tat-Seng
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1233 - 1241
  • [22] An iterative transfer learning framework for cross-domain tongue segmentation
    Li, Lei
    Luo, Zhiming
    Zhang, Mengting
    Cai, Yuanzheng
    Li, Candong
    Li, Shaozi
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2020, 32 (14)
  • [23] Cross-view gait recognition through ensemble learning
    Wang, Xiuhui
    Yan, Wei Qi
NEURAL COMPUTING & APPLICATIONS, 2020, 32 (11): 7275 - 7287
  • [24] A Recommendation Algorithm Based on Cross-View Contrastive Learning
    Wang, Yuying
    Zhou, Jing
    Liu, Qian
27TH IEEE/ACIS INTERNATIONAL SUMMER CONFERENCE ON SOFTWARE ENGINEERING, ARTIFICIAL INTELLIGENCE, NETWORKING AND PARALLEL/DISTRIBUTED COMPUTING, SNPD 2024-SUMMER, 2024, : 177 - 182
  • [26] Knowledge Graph Cross-View Contrastive Learning for Recommendation
    Meng, Zeyuan
    Ounis, Iadh
    Macdonald, Craig
    Yi, Zixuan
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III, 2024, 14610 : 3 - 18
  • [27] Cross-View Gait Recognition by Discriminative Feature Learning
    Zhang, Yuqi
    Huang, Yongzhen
    Yu, Shiqi
    Wang, Liang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1001 - 1015
  • [28] Cross-view Activity Recognition using Hankelets
    Li, Binlong
    Camps, Octavia I.
    Sznaier, Mario
    2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012, : 1362 - 1369
  • [29] Learning hash functions for cross-view similarity search
    Kumar, Shaishav
    Udupa, Raghavendra
    PROCEEDINGS OF THE 22ND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI 2011), 2011, : 1360 - 1365
  • [30] Deep cross-view autoencoder network for multi-view learning
    Mi, Jian-Xun
    Fu, Chang-Qing
    Chen, Tao
    Gou, Tingting
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (17) : 24645 - 24664