The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification

Cited by: 9
Authors
Altini, Nicola [1 ]
Marvulli, Tommaso Maria [3 ]
Zito, Francesco Alfredo [4 ]
Caputo, Mariapia [5 ]
Tommasi, Stefania [5 ]
Azzariti, Amalia [3 ]
Brunetti, Antonio [1 ,2 ]
Prencipe, Berardino [1 ]
Mattioli, Eliseo [4 ]
De Summa, Simona [5 ]
Bevilacqua, Vitoantonio [1 ,2 ]
Affiliations
[1] Polytech Univ Bari, Dept Elect & Informat Engn DEI, Via Edoardo Orabona 4, I-70126 Bari, Italy
[2] Apulian Bioengn Srl, Via Violette 14, I-70026 Modugno, Italy
[3] IRCCS Ist Tumori Giovanni Paolo II, Lab Expt Pharmacol, Via O Flacco 65, I-70124 Bari, Italy
[4] IRCCS Ist Tumori Giovanni Paolo II, Pathol Dept, Via O Flacco 65, I-70124 Bari, Italy
[5] IRCCS Ist Tumori Giovanni Paolo II, Mol Diagnost & Pharmacogenet Unit, Via O Flacco 65, I-70124 Bari, Italy
Keywords
Colorectal cancer; Generative adversarial network; Stain color normalization; Computer-aided diagnosis;
DOI
10.1016/j.cmpb.2023.107511
CLC Classification
TP39 [Computer applications];
Subject Classification Codes
081203; 0835;
Abstract
Background: Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Unfortunately, manual annotation by trained specialists is a burdensome operation, which suffers from problems like intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches for challenges like tissue segmentation and classification. In this respect, an important obstacle to overcome is stain color variation among different laboratories, which can degrade classifier performance. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them to classical normalization techniques for Hematoxylin-Eosin (H&E) images. Methods: Five Deep Learning normalization models based on Generative Adversarial Networks (GANs) belonging to the UI2IT paradigm have been thoroughly compared to realize a robust stain color normalization pipeline. To avoid the need to train a style transfer GAN between each pair of data domains, in this paper we introduce the concept of training on a meta-domain, which contains data coming from a wide variety of laboratories. The proposed framework enables a huge saving in training time by allowing a single image normalization model to be trained for a target laboratory. To prove the applicability of the proposed workflow in clinical practice, we conceived a novel perceptive quality measure, which we defined as Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were exploited to realize a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM).
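The CNN-features + SVM stage described above can be sketched as follows. This is an illustrative example, not the authors' code: the random Gaussian clusters below are hypothetical stand-ins for DenseNet201 feature vectors extracted from tiles of two tissue classes.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: in the paper each tile is encoded by a
# DenseNet201 backbone into a deep feature vector; here two random
# Gaussian clusters emulate two tissue classes for illustration only.
rng = np.random.default_rng(42)
n, d = 400, 128
X = np.vstack([rng.normal(0.0, 1.0, size=(n // 2, d)),
               rng.normal(1.5, 1.0, size=(n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Linear-kernel SVM on standardized deep features, the usual setup for
# a CNN-features + SVM computer-aided diagnosis pipeline.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In the actual pipeline, `X` would be replaced by features pooled from a pretrained DenseNet201 applied to (normalized) H&E tiles.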
To prove the reliability of the system on new data, an external validation set composed of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II". Results: The exploitation of a meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric was found to be correlated with the quality of distributions (Fréchet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), thus showing that GAN quality measures introduced in natural image processing tasks can be linked to pathologist evaluation of H&E images. Furthermore, FID was found to be correlated with the accuracies of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on the fast variant of CUT (Contrastive Unpaired Translation), FastCUT, trained with the meta-domain paradigm, achieved the best classification result for the downstream task and, correspondingly, showed the highest FID on the classification dataset.
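The Fréchet Inception Distance used in the evaluation compares two feature distributions through their means and covariances. As an illustrative sketch (not the authors' implementation), assuming feature vectors have already been extracted, e.g. by an Inception network, FID can be computed as:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a, feats_b):
    """FID between two sets of feature vectors (rows = samples).

    FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; small imaginary
    # parts can appear from numerical error and are discarded.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

A lower FID indicates that the feature statistics of the normalized images are closer to those of the target domain.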
Pages: 18
Related Papers
50 items total
  • [31] Asynchronous Generative Adversarial Network for Asymmetric Unpaired Image-to-Image Translation
    Zheng, Ziqiang
    Bin, Yi
    Lv, Xiaoou
    Wu, Yang
    Yang, Yang
    Shen, Heng Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 2474 - 2487
  • [32] Unpaired image-to-image translation with improved two-dimensional feature
    Hangyao Tu
    Wanliang Wang
    Jiachen Chen
    Fei Wu
    Guoqing Li
    Multimedia Tools and Applications, 2022, 81 : 43851 - 43872
  • [33] Multi-feature contrastive learning for unpaired image-to-image translation
    Gou, Yao
    Li, Min
    Song, Yu
    He, Yujie
    Wang, Litao
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (04) : 4111 - 4122
  • [34] GAN-based unpaired image-to-image translation for maritime imagery
    Mediavilla, Chelsea
    Sato, Jonathan
    Manzanares, Mitch
    Dotter, Marissa
    Parameswaran, Shibin
    GEOSPATIAL INFORMATICS X, 2020, 11398
  • [36] Enhanced Unpaired Image-to-Image Translation via Transformation in Saliency Domain
    Shibasaki, Kei
    Ikehara, Masaaki
    IEEE ACCESS, 2023, 11 : 137495 - 137505
  • [37] Trans-Cycle: Unpaired Image-to-Image Translation Network by Transformer
    Tian, Kai
    Pan, Mengze
    Lu, Zongqing
    Liao, Qingmin
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259 : 576 - 587
  • [38] Domain Bridge for Unpaired Image-to-Image Translation and Unsupervised Domain Adaptation
    Pizzati, Fabio
    de Charette, Raoul
    Zaccaria, Michela
    Cerri, Pietro
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 2979 - 2987
  • [39] UNPAIRED IMAGE-TO-IMAGE TRANSLATION BASED DOMAIN ADAPTATION FOR POLYP SEGMENTATION
    Xiong, Xinyu
    Li, Siying
    Li, Guanbin
    2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,
  • [40] Background-focused contrastive learning for unpaired image-to-image translation
    Shao, Mingwen
    Han, Minggui
    Meng, Lingzhuang
    Liu, Fukang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (04)