The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification

Cited by: 9
Authors
Altini, Nicola [1 ]
Marvulli, Tommaso Maria [3 ]
Zito, Francesco Alfredo [4 ]
Caputo, Mariapia [5 ]
Tommasi, Stefania [5 ]
Azzariti, Amalia [3 ]
Brunetti, Antonio [1 ,2 ]
Prencipe, Berardino [1 ]
Mattioli, Eliseo [4 ]
De Summa, Simona [5 ]
Bevilacqua, Vitoantonio [1 ,2 ]
Affiliations
[1] Polytech Univ Bari, Dept Elect & Informat Engn DEI, Via Edoardo Orabona 4, I-70126 Bari, Italy
[2] Apulian Bioengn Srl, Via Violette 14, I-70026 Modugno, Italy
[3] IRCCS Ist Tumori Giovanni Paolo II, Lab Expt Pharmacol, Via O Flacco 65, I-70124 Bari, Italy
[4] IRCCS Ist Tumori Giovanni Paolo II, Pathol Dept, Via O Flacco 65, I-70124 Bari, Italy
[5] IRCCS Ist Tumori Giovanni Paolo II, Mol Diagnost & Pharmacogenet Unit, Via O Flacco 65, I-70124 Bari, Italy
Keywords
Colorectal cancer; Generative adversarial network; Stain color normalization; Computer-aided diagnosis;
DOI
10.1016/j.cmpb.2023.107511
Chinese Library Classification (CLC)
TP39 [Applications of computers];
Discipline codes
081203; 0835;
Abstract
Background: Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Unfortunately, manual annotation by trained specialists is a burdensome operation that suffers from problems such as intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches to challenges like tissue segmentation and classification. In this respect, an important obstacle to overcome is stain color variation among different laboratories, which can decrease the performance of classifiers. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them to classical normalization techniques for Hematoxylin-Eosin (H&E) images. Methods: Five Deep Learning normalization models based on Generative Adversarial Networks (GANs), all belonging to the UI2IT paradigm, were thoroughly compared in order to build a robust stain color normalization pipeline. To avoid the need to train a separate style-transfer GAN between each pair of data domains, we introduce the concept of training on a meta-domain, which contains data coming from a wide variety of laboratories. The proposed framework enables a large saving in training time, since a single image normalization model is trained for a target laboratory. To prove the applicability of the proposed workflow in clinical practice, we conceived a novel perceptive quality measure, which we defined as Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were exploited to build a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM).
To prove the reliability of the system on new data, an external validation set composed of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II". Results: Exploiting a meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric was found to correlate with the quality of the generated distributions (Fréchet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), showing that GAN quality measures introduced for natural-image processing tasks can be linked to pathologist evaluation of H&E images. Furthermore, FID was found to correlate with the accuracy of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on the fast variant of CUT (Contrastive Unpaired Translation), FastCUT, trained with the meta-domain paradigm, achieved the best classification result for the downstream task and, correspondingly, showed the highest FID on the classification dataset.
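The downstream classification stage described in the abstract (deep CNN features fed to an SVM) can be sketched as follows. This is a minimal illustration, not the authors' code: random vectors stand in for the 1920-dimensional DenseNet201 embeddings, and the class count and tile count are placeholders chosen for the demo.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in features: in the paper these would be DenseNet201 embeddings
# (1920-dim) extracted from H&E tiles; here we synthesize separable data.
rng = np.random.default_rng(0)
n_tiles, n_features, n_classes = 600, 1920, 9  # placeholder sizes
X = rng.normal(size=(n_tiles, n_features))
y = rng.integers(0, n_classes, size=n_tiles)
X += y[:, None] * 0.1  # shift class means so the toy problem is learnable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# SVM on deep features, as in the paper's Computer-Aided Diagnosis stage.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"tile-level accuracy on held-out split: {acc:.3f}")
```

In the actual pipeline, `X` would be produced by running normalized tiles through a pretrained DenseNet201 and taking the pooled feature layer; everything after feature extraction is a standard scikit-learn workflow.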
Pages: 18