Two-stage multi-modal MR images fusion method based on Parametric Logarithmic Image Processing (PLIP) Model

Cited: 5
Authors
Bhateja, Vikrant [1 ,2 ]
Nigam, Mansi [1 ,3 ]
Bhadauria, Anuj Singh [1 ,4 ]
Arya, Anu [1 ,5 ]
Affiliations
[1] Shri Ramswaroop Mem Grp Profess Coll SRMGPC, Dept Elect & Commun Engn, Faizabad Rd, Lucknow 226028, Uttar Pradesh, India
[2] Dr APJ Abdul Kalam Tech Univ, Lucknow 226031, Uttar Pradesh, India
[3] Robert Bosch Engn & Business Solut Private Ltd, Bangalore 560030, Karnataka, India
[4] TATA Consultancy Serv Ltd, Lucknow 226010, Uttar Pradesh, India
[5] Robert Bosch Engn & Business Solut Private Ltd, Near CHI SEZ IT Pk, Coimbatore 641035, Tamil Nadu, India
Keywords
MRI; HVS; PLIP; CONTOURLET TRANSFORM; WAVELET; FRAMEWORK;
DOI
10.1016/j.patrec.2020.05.027
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
MRI is one of the most widely used techniques for the screening of brain tumors. MRI can be acquired in four modalities: MR-T1, MR-T2, MR-PD and MR-Gad; among these, MR-T2 contains most of the detailed information about tumorous tissues. However, because each modality highlights a different set of tissues, the lack of sufficient detail in any single modality may affect the accuracy and reliability of diagnosis. MR image fusion is therefore essential to obtain a more illustrative image that carries the requisite complementary details of each modality. In this work, multi-modal fusion of MR-T2 with MR-T1, MR-PD and MR-Gad is performed using the proposed fusion method. This paper presents a two-stage fusion method that uses the Stationary Wavelet Transform (SWT) in combination with the Parameterized Logarithmic Image Processing (PLIP) model. At Stage-I of sub-band decomposition, the first-level SWT coefficients contain a large amount of noise, which suppresses the necessary edge information. This is resolved at Stage-II by employing a second-level SWT decomposition along with Principal Component Analysis (PCA). The fusion coefficients from both stages are finally combined using PLIP operators prior to reconstruction. The results are compared qualitatively as well as quantitatively using fusion metrics such as Entropy, Fusion Factor, Standard Deviation and Edge Strength. The PLIP fusion model yields a noteworthy visual response that is coherent with Human Visual System (HVS) characteristics. (C) 2020 Elsevier B.V. All rights reserved.
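To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of a two-stage SWT fusion with a PLIP-style combination of the stage outputs, assuming PyWavelets (pywt) and NumPy. The specific fusion rules (PCA weighting of approximation coefficients, max-absolute selection of detail coefficients), the PLIP parameter gamma, and the choice to apply the PLIP addition after per-stage reconstruction (the paper applies PLIP operators to the coefficients prior to reconstruction) are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
import pywt  # PyWavelets provides swt2 / iswt2 used below

GAMMA = 256.0  # assumed PLIP parameter; the paper parameterizes this value


def plip_add(a, b, gamma=GAMMA):
    # One common form of the PLIP addition operator: a (+) b = a + b - a*b/gamma.
    # The gray-tone mapping g = M - f of the full PLIP model is omitted for brevity.
    return a + b - (a * b) / gamma


def pca_weights(x, y):
    # Weights from the principal eigenvector of the 2x2 covariance matrix
    # of the two coefficient planes, normalised to sum to 1.
    cov = np.cov(np.vstack([x.ravel(), y.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()


def swt_fuse(img1, img2, level, wavelet="db1"):
    # Fuse two registered images at the given SWT level:
    # PCA-weighted average of approximation coefficients and
    # max-absolute selection of detail coefficients.
    # Note: swt2 requires image dimensions divisible by 2**level.
    c1 = pywt.swt2(img1, wavelet, level=level)
    c2 = pywt.swt2(img2, wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c1, c2):
        w1, w2 = pca_weights(a1, a2)
        a_f = w1 * a1 + w2 * a2
        d_f = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in zip(d1, d2))
        fused.append((a_f, d_f))
    return pywt.iswt2(fused, wavelet)


def two_stage_fusion(mr_t2, mr_other):
    # Stage-I: single-level SWT fusion; Stage-II: two-level SWT fusion.
    # The two stage outputs are combined with the PLIP addition operator
    # (applied here after reconstruction, as a simplification).
    stage1 = swt_fuse(mr_t2.astype(float), mr_other.astype(float), level=1)
    stage2 = swt_fuse(mr_t2.astype(float), mr_other.astype(float), level=2)
    return np.clip(plip_add(stage1, stage2), 0.0, 255.0)
```

For two co-registered 2-D slices whose dimensions are multiples of four, a call such as fused = two_stage_fusion(t2_slice, t1_slice) would return the fused slice in the original intensity range.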
Pages: 25-30
Number of pages: 6
Related Papers
50 records in total
  • [1] A Two-Stage Attention Based Modality Fusion Framework for Multi-Modal Speech Emotion Recognition
    Hu, Dongni
    Chen, Chengxin
    Zhang, Pengyuan
    Li, Junfeng
    Yan, Yonghong
    Zhao, Qingwei
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2021, E104D (08) : 1391 - 1394
  • [2] TAG-fusion: Two-stage attention guided multi-modal fusion network for semantic segmentation
    Zhang, Zhizhou
    Wang, Wenwu
    Zhu, Lei
    Tang, Zhibin
    DIGITAL SIGNAL PROCESSING, 2025, 156
  • [3] MFHOD: Multi-modal image fusion method based on the higher-order degradation model
    Guo, Jinxin
    Zhan, Weida
    Jiang, Yichun
    Ge, Wei
    Chen, Yu
    Xu, Xiaoyu
    Li, Jin
    Liu, Yanyan
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249
  • [4] Non-rigid multi-modal brain image registration based on two-stage generative adversarial nets
    Zhu, Xingxing
    Huang, Zhiwen
    Ding, Mingyue
    Zhang, Xuming
    NEUROCOMPUTING, 2022, 505 : 44 - 57
  • [5] A novel method for multi-modal fusion based image embedding and compression technique using CT/PET images
    Saranya, G.
    Devi, Nirmala S.
    BIOMEDICAL RESEARCH-INDIA, 2017, 28 (06) : 2796 - 2800
  • [6] A Two-Stage Image Segmentation Model for Multi-Channel Images
    Li, Zhi
    Zeng, Tieyong
    COMMUNICATIONS IN COMPUTATIONAL PHYSICS, 2016, 19 (04) : 904 - 926
  • [7] Adaptive decomposition method for multi-modal medical image fusion
    Wang, Jing
    Li, Xiongfei
    Zhang, Yan
    Zhang, Xiaoli
    IET IMAGE PROCESSING, 2018, 12 (08) : 1403 - 1412
  • [8] A multi-modal and multi-stage fusion enhancement network for segmentation based on OCT and OCTA images
    Quan, Xiongwen
    Hou, Guangyao
    Yin, Wenya
    Zhang, Han
    INFORMATION FUSION, 2025, 113
  • [9] A Multi-modal Medical Image Fusion Method in Spatial Domain
    Yan, Huibin
    Li, Zhongmin
    PROCEEDINGS OF 2019 IEEE 3RD INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2019), 2019, : 597 - 601
  • [10] Bandelet-Based Image Fusion: A Comparative Study for Multi-Focus, Multi-Modal Images
    Giansiracusa, Michael
    Lutz, Adam
    Messer, Neal
    Ezekiel, Soundararajan
    Blasch, Erik
    Alford, Mark
    GEOSPATIAL INFORMATICS, FUSION, AND MOTION VIDEO ANALYTICS VI, 2016, 9841