Two-stage multi-modal MR images fusion method based on Parametric Logarithmic Image Processing (PLIP) Model

Cited by: 5
Authors
Bhateja, Vikrant [1 ,2 ]
Nigam, Mansi [1 ,3 ]
Bhadauria, Anuj Singh [1 ,4 ]
Arya, Anu [1 ,5 ]
Affiliations
[1] Shri Ramswaroop Mem Grp Profess Coll SRMGPC, Dept Elect & Commun Engn, Faizabad Rd, Lucknow 226028, Uttar Pradesh, India
[2] Dr APJ Abdul Kalam Tech Univ, Lucknow 226031, Uttar Pradesh, India
[3] Robert Bosch Engn & Business Solut Private Ltd, Bangalore 560030, Karnataka, India
[4] TATA Consultancy Serv Ltd, Lucknow 226010, Uttar Pradesh, India
[5] Robert Bosch Engn & Business Solut Private Ltd, Near CHI SEZ IT Pk, Coimbatore 641035, Tamil Nadu, India
Keywords
MRI; HVS; PLIP; CONTOURLET TRANSFORM; WAVELET; FRAMEWORK;
DOI
10.1016/j.patrec.2020.05.027
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
MRI is one of the most suitable techniques for the screening of brain tumors. MRI can be acquired in four modalities, namely MR-T1, MR-T2, MR-PD and MR-Gad; among these, MR-T2 contains most of the detailed information on tumorous tissues. However, the accuracy and reliability of the diagnosis may be affected by the lack of sufficient detail in any single modality (as different MRI modalities highlight different sets of tissues). Therefore, MR image fusion is essential to obtain a more illustrative image containing the requisite complementary details of each modality. To this end, multi-modal fusion of MR-T2 with MR-T1, MR-PD and MR-Gad is carried out in this work using the proposed fusion method. This paper presents a two-stage fusion method using the Stationary Wavelet Transform (SWT) in combination with the Parameterized Logarithmic Image Processing (PLIP) model. In Stage I of sub-band decomposition, the first-level SWT coefficients contain a large amount of noise, which suppresses the necessary edge information. This is resolved in Stage II by employing a second-level SWT decomposition along with Principal Component Analysis (PCA). The fusion coefficients from both stages are finally fused using PLIP operators (prior to reconstruction). The obtained results are compared qualitatively as well as quantitatively using fusion metrics such as Entropy, Fusion Factor, Standard Deviation and Edge Strength. A noteworthy visual response is obtained with the PLIP fusion model, in coherence with Human Visual System (HVS) characteristics. (C) 2020 Elsevier B.V. All rights reserved.
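The abstract does not spell out the PLIP operators used for the final combination step. As a rough illustration only, below is a minimal NumPy sketch of the standard parameterized PLIP gray-tone function, addition, and scalar multiplication from the PLIP literature; the parameter names `mu` and `gamma` and their default values are assumptions for an 8-bit intensity range, not values taken from this paper.

```python
import numpy as np

def plip_g(image, mu=256.0):
    """Gray-tone function: map image intensity to the PLIP gray-tone scale.

    Applying it twice recovers the original image (it is an involution).
    """
    return mu - image

def plip_add(g1, g2, gamma=256.0):
    """PLIP addition of two gray-tone images (nonlinear, HVS-inspired)."""
    return g1 + g2 - (g1 * g2) / gamma

def plip_scalar_mult(c, g, gamma=256.0):
    """PLIP multiplication of a gray-tone image by a real scalar c."""
    return gamma - gamma * (1.0 - g / gamma) ** c

# Example: combine two coefficient maps as a PLIP-weighted sum,
# analogous to fusing the outputs of two decomposition stages.
g1 = plip_g(np.array([40.0, 120.0, 200.0]))
g2 = plip_g(np.array([60.0, 100.0, 180.0]))
fused = plip_add(plip_scalar_mult(0.5, g1), plip_scalar_mult(0.5, g2))
```

The design intent of PLIP is that addition saturates gracefully toward the maximum gray tone instead of overflowing, which is why it is often preferred over plain arithmetic averaging when merging wavelet coefficients.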
Pages: 25-30 (6 pages)
Related papers (50 items)
  • [31] Visual Sorting Method Based on Multi-Modal Information Fusion
    Han, Song
    Liu, Xiaoping
    Wang, Gang
    APPLIED SCIENCES-BASEL, 2022, 12 (06):
  • [32] Multi-modal Perception Fusion Method Based on Cross Attention
    Zhang B.-L.
    Pan Z.-H.
    Jiang J.-Z.
    Zhang C.-B.
    Wang Y.-X.
    Yang C.-L.
    Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2024, 37 (03): : 181 - 193
  • [33] Evaluation Method of Teaching Styles Based on Multi-modal Fusion
    Tang, Wen
    Wang, Chongwen
    Zhang, Yi
    2021 THE 7TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION PROCESSING, ICCIP 2021, 2021, : 9 - 15
  • [34] A Collaborative Anomaly Localization Method Based on Multi-Modal Images
    Li, Yuanhang
    Yao, Junfeng
    Chen, Kai
    Zhang, Han
    Sun, Xiaodong
    Qian, Quan
    Wu, Xing
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 1322 - 1327
  • [35] A Two-Stage Spatiotemporal Fusion Method for Remote Sensing Images
    Sun, Yue
    Zhang, Hua
    PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, 2019, 85 (12): : 907 - 914
  • [36] Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method
    Choi, Jae Wan
    Kim, Dae Sung
    Lee, Byoung Kil
    Kim, Yong Il
    Yu, Ki Yun
    KOREAN JOURNAL OF REMOTE SENSING, 2006, 22 (04) : 295 - 304
  • [37] Multi-modal image fusion based on saliency guided in NSCT domain
    Wang, Shiying
    Shen, Yan
    IET IMAGE PROCESSING, 2020, 14 (13) : 3188 - 3201
  • [38] Leveraging multi-modal fusion for graph-based image annotation
    Amiri, S. Hamid
    Jamzad, Mansour
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 55 : 816 - 828
  • [39] Multi-Modal Image Fusion Based on Matrix Product State of Tensor
    Lu, Yixiang
    Wang, Rui
    Gao, Qingwei
    Sun, Dong
    Zhu, De
    FRONTIERS IN NEUROROBOTICS, 2021, 15
  • [40] IMAGE DESCRIPTION THROUGH FUSION BASED RECURRENT MULTI-MODAL LEARNING
    Oruganti, Ram Manohar
    Sah, Shagan
    Pillai, Suhas
    Ptucha, Raymond
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 3613 - 3617