The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification

Cited by: 9
Authors
Altini, Nicola [1]
Marvulli, Tommaso Maria [3]
Zito, Francesco Alfredo [4]
Caputo, Mariapia [5]
Tommasi, Stefania [5]
Azzariti, Amalia [3]
Brunetti, Antonio [1,2]
Prencipe, Berardino [1]
Mattioli, Eliseo [4]
De Summa, Simona [5]
Bevilacqua, Vitoantonio [1,2]
Affiliations
[1] Polytech Univ Bari, Dept Elect & Informat Engn DEI, Via Edoardo Orabona 4, I-70126 Bari, Italy
[2] Apulian Bioengn Srl, Via Violette 14, I-70026 Modugno, Italy
[3] IRCCS Ist Tumori Giovanni Paolo II, Lab Expt Pharmacol, Via O Flacco 65, I-70124 Bari, Italy
[4] IRCCS Ist Tumori Giovanni Paolo II, Pathol Dept, Via O Flacco 65, I-70124 Bari, Italy
[5] IRCCS Ist Tumori Giovanni Paolo II, Mol Diagnost & Pharmacogenet Unit, Via O Flacco 65, I-70124 Bari, Italy
Keywords
Colorectal cancer; Generative adversarial network; Stain color normalization; Computer-aided diagnosis
DOI
10.1016/j.cmpb.2023.107511
Chinese Library Classification
TP39 [Computer applications]
Subject Classification Code
081203; 0835
Abstract
Background: Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Manual annotation by trained specialists is burdensome and suffers from intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches to challenges such as tissue segmentation and classification. In this respect, an important obstacle to overcome is stain color variation among laboratories, which can degrade classifier performance. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them to classical normalization techniques for Hematoxylin-Eosin (H&E) images.
Methods: Five Deep Learning normalization models based on Generative Adversarial Networks (GANs) belonging to the UI2IT paradigm were thoroughly compared to build a robust stain color normalization pipeline. To avoid training a style-transfer GAN between every pair of data domains, we introduce the concept of training on a meta-domain, which contains data from a wide variety of laboratories. The proposed framework yields a substantial saving in training time, since a single image normalization model is trained for each target laboratory. To assess the applicability of the proposed workflow in clinical practice, we conceived a novel perceptual quality measure, the Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were used to build a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM). To prove the reliability of the system on new data, an external validation set of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II".
Results: Exploiting a meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric was found to correlate with the quality of distributions (Frechet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), showing that GAN quality measures introduced for natural image processing tasks can be linked to pathologist evaluation of H&E images. Furthermore, FID was found to correlate with the accuracies of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on the fast variant of CUT (Contrastive Unpaired Translation), FastCUT, trained with the meta-domain paradigm, achieved the best classification result on the downstream task and, correspondingly, showed the highest FID on the classification dataset.
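As a concrete illustration of the downstream classification stage described in the abstract, the following Python sketch extracts DenseNet201 deep features from H&E tiles and trains an SVM on top of them. This is a minimal sketch under stated assumptions (PyTorch, torchvision, and scikit-learn are available; tiles are PIL images that have already been stain-normalized); the tile loading, the RBF kernel, and C=1.0 are illustrative choices, not the configuration reported in the paper.

    # Minimal sketch: DenseNet201 deep features + SVM for tissue-type classification.
    # Assumptions: PyTorch/torchvision/scikit-learn installed; tiles are stain-normalized PIL images.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # DenseNet201 backbone pretrained on ImageNet; replacing the classifier head with an
    # identity layer makes the network return the pooled 1920-dimensional feature vector.
    backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    backbone.classifier = nn.Identity()
    backbone.eval().to(device)

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(tiles):
        """Return an (N, 1920) feature matrix for a list of stain-normalized PIL tiles."""
        batch = torch.stack([preprocess(t) for t in tiles]).to(device)
        return backbone(batch).cpu().numpy()

    # SVM on top of the deep features; kernel and C are illustrative defaults,
    # not the hyperparameters reported in the paper.
    classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    # classifier.fit(extract_features(train_tiles), train_labels)
    # predictions = classifier.predict(extract_features(test_tiles))

The same normalized tiles could also be scored with standard GAN quality metrics such as FID and LPIPS (for example via the torchmetrics or lpips packages) to reproduce the kind of correlation analysis between image quality and downstream accuracy that the abstract describes.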
Pages: 18