Learning with limited target data to detect cells in cross-modality images

Cited by: 1
Authors
Xing, Fuyong [1 ]
Yang, Xinyi [1 ]
Cornish, Toby C. [2 ]
Ghosh, Debashis [1 ]
Affiliations
[1] Univ Colorado, Dept Biostat & Informat, Anschutz Med Campus, 13001 17th Pl, Aurora, CO 80045 USA
[2] Univ Colorado, Dept Pathol, Anschutz Med Campus, 13001 17th Pl, Aurora, CO 80045 USA
Funding
US National Institutes of Health
Keywords
Cell detection; Nucleus detection; Microscopy images; GAN; Domain adaptation; Low resource; Segmentation; Nuclei
DOI
10.1016/j.media.2023.102969
CLC Classification
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often expensive or even impossible to obtain in real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting that previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. We then further improve the GAN by introducing a differentiable, stochastic data augmentation module that explicitly reduces discriminator overfitting. We examine source-, target- and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline, and that it is superior to or on par with fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
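For orientation, the task-augmented dual GAN described in the abstract can be read as a CycleGAN-style objective with an extra detection loss on translated images. Below is a minimal PyTorch sketch, not the authors' code: the toy backbones, the names G_st, G_ts, D_s, D_t and detector, and the loss weights are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): a bidirectional,
# task-augmented GAN objective. Backbones are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

G_st = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # source -> target
G_ts = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # target -> source
D_t  = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # target-domain critic
D_s  = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # source-domain critic
detector = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # cell-map regressor

def generator_step(x_s, x_t, y_s, lam_cyc=10.0, lam_task=1.0):
    """One generator update: adversarial + cycle-consistency + task supervision."""
    fake_t = G_st(x_s)  # source image rendered in target style
    fake_s = G_ts(x_t)
    pred_ft, pred_fs = D_t(fake_t), D_s(fake_s)
    # Least-squares (LSGAN-style) adversarial terms.
    loss_adv = F.mse_loss(pred_ft, torch.ones_like(pred_ft)) + \
               F.mse_loss(pred_fs, torch.ones_like(pred_fs))
    # Cycle consistency preserves content while changing modality.
    loss_cyc = F.l1_loss(G_ts(fake_t), x_s) + F.l1_loss(G_st(fake_s), x_t)
    # Task augmentation: the detector must still locate cells (source labels
    # y_s) in the translated image, an extra supervision signal for G_st.
    loss_task = F.mse_loss(detector(fake_t), y_s)
    return loss_adv + lam_cyc * loss_cyc + lam_task * loss_task

# Usage with dummy tensors; y_s stands in for, e.g., proximity maps around
# annotated cell centers.
x_s, x_t = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
y_s = torch.rand(2, 1, 64, 64)
generator_step(x_s, x_t, y_s).backward()

In the single-directional variant the sketch would keep only G_st, D_t and the task loss on fake_t, dropping the reverse mapping and its cycle term.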
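The differentiable, stochastic augmentation module can likewise be sketched in the spirit of DiffAugment: the same random, differentiable transforms are applied to both real and generated images immediately before the discriminator, so gradients still reach the generator while the discriminator cannot memorize a small target set. The specific transforms and function names below are assumptions, not the paper's implementation.

# Minimal sketch of differentiable stochastic augmentation in front of the
# discriminator (DiffAugment-style); transforms are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def diff_augment(x):
    """Random brightness jitter and translation; both ops are differentiable,
    so discriminator gradients flow back through them to the generator."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.4  # brightness
    shift = torch.randint(-2, 3, (2,))                                     # small offset
    return torch.roll(x, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))

def discriminator_step(D, real_t, fake_t):
    """Discriminator sees only augmented images, real and fake alike."""
    real_logits = D(diff_augment(real_t))
    fake_logits = D(diff_augment(fake_t.detach()))
    return F.mse_loss(real_logits, torch.ones_like(real_logits)) + \
           F.mse_loss(fake_logits, torch.zeros_like(fake_logits))

# Usage with a toy critic and dummy target-domain batches.
D = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
real_t, fake_t = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
discriminator_step(D, real_t, fake_t).backward()

Augmenting the inputs of both critics of the dual GAN rather than one corresponds to the dual-domain variant mentioned in the abstract; restricting it to one critic gives the source- or target-domain variants.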
Pages: 12