MULTI-MODAL INFORMATION FUSION FOR CLASSIFICATION OF KIDNEY ABNORMALITIES

Times Cited: 1
Authors
Varsha, S. [1 ]
Nasser, Sahar Almahfouz [1 ]
Bala, Gouranga [1 ]
Kurian, Nikhil Cherian [1 ]
Sethi, Amit [1 ]
Affiliations
[1] Indian Institute of Technology, Department of Electrical Engineering, Mumbai, Maharashtra, India
Keywords
Deep Learning; CT Imagery; nnU-Net; Biomedical Image Segmentation;
DOI
10.1109/ISBIC56247.2022.9854644
Chinese Library Classification
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
Being able to predict the outcome of a treatment has obvious utility in treatment planning. Retrospective studies investigating the correlation of various tumor morphological characteristics with treatment outcomes are becoming increasingly feasible due to improved data collection and advances in machine learning. For renal cancers, computed tomography (CT) imaging is a widely used diagnostic modality owing to its clearly discernible visual features. However, manual inspection of numerous CT images is labour-intensive and often subjective. To automate this task, we propose an attention-based deep learning framework that automatically analyzes renal tumors by fusing clinical and imaging features. We demonstrate its effectiveness on the 2022 KNIGHT challenge.
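The following is a minimal PyTorch sketch of the kind of attention-based fusion of imaging and clinical features described in the abstract; the module name ImagingClinicalFusion, the feature dimensions, and the binary output are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of attention-based fusion of imaging and clinical features.
# Assumptions (not from the paper): the imaging branch yields a pooled CNN
# feature vector per CT volume, the clinical data is a fixed-length numeric
# vector, and the task is binary risk classification. Names are illustrative.
import torch
import torch.nn as nn


class ImagingClinicalFusion(nn.Module):
    def __init__(self, img_dim=512, clin_dim=16, hidden=128, n_classes=2):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.clin_proj = nn.Linear(clin_dim, hidden)
        # Scalar attention score per modality embedding.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, img_feat, clin_feat):
        # img_feat: (B, img_dim), clin_feat: (B, clin_dim)
        tokens = torch.stack(
            [self.img_proj(img_feat), self.clin_proj(clin_feat)], dim=1
        )  # (B, 2, hidden)
        scores = self.attn(torch.tanh(tokens))       # (B, 2, 1)
        weights = torch.softmax(scores, dim=1)       # attention over modalities
        fused = (weights * tokens).sum(dim=1)        # (B, hidden)
        return self.classifier(fused)


if __name__ == "__main__":
    model = ImagingClinicalFusion()
    img = torch.randn(4, 512)      # e.g., pooled CNN features from CT slices
    clin = torch.randn(4, 16)      # e.g., age, comorbidities, lab values
    print(model(img, clin).shape)  # torch.Size([4, 2])
```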
Pages: 4
Related Papers
50 records in total
  • [21] An effective multi-modal adaptive contextual feature information fusion method for Chinese long text classification
    Xu, Yangshuyi
    Liu, Guangzhong
    Zhang, Lin
    Shen, Xiang
    Luo, Sizhe
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (09)
  • [22] Multi-modal fusion attention sentiment analysis for mixed sentiment classification
    Xue, Zhuanglin
    Xu, Jiabin
    [J]. COGNITIVE COMPUTATION AND SYSTEMS, 2024,
  • [23] AF: An Association-Based Fusion Method for Multi-Modal Classification
    Liang, Xinyan
    Qian, Yuhua
    Guo, Qian
    Cheng, Honghong
    Liang, Jiye
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) : 9236 - 9254
  • [24] Exploring Fusion Strategies in Deep Learning Models for Multi-Modal Classification
    Zhang, Duoyi
    Nayak, Richi
    Bashar, Md Abul
    [J]. DATA MINING, AUSDM 2021, 2021, 1504 : 102 - 117
  • [25] QUARC: Quaternion Multi-Modal Fusion Architecture For Hate Speech Classification
    Kumar, Deepak
    Kumar, Nalin
    Mishra, Subhankar
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP 2021), 2021, : 346 - 349
  • [26] Tile Classification Based Viewport Prediction with Multi-modal Fusion Transformer
    Zhang, Zhihao
    Chen, Yiwei
    Zhang, Weizhan
    Yan, Caixia
    Zheng, Qinghua
    Wang, Qi
    Chen, Wangdu
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3560 - 3568
  • [27] MULTI-MODAL INFORMATION FUSION FOR NEWS STORY SEGMENTATION IN BROADCAST VIDEO
    Feng, Bailan
    Ding, Peng
    Chen, Jiansong
    Bai, Jinfeng
    Xu, Su
    Xu, Bo
    [J]. 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2012, : 1417 - 1420
  • [28] ART-Based Fusion of Multi-modal Information for Mobile Robots
    Berghoefer, Elmar
    Schulze, Denis
    Tscherepanow, Marko
    Wachsmuth, Sven
    [J]. ENGINEERING APPLICATIONS OF NEURAL NETWORKS, PT I, 2011, 363 : 1 - 10
  • [29] Hierarchical Multi-Modal Prompting Transformer for Multi-Modal Long Document Classification
    Liu, Tengfei
    Hu, Yongli
    Gao, Junbin
    Sun, Yanfeng
    Yin, Baocai
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 6376 - 6390
  • [30] Soft multi-modal data fusion
    Coppock, S
    Mazack, L
    [J]. PROCEEDINGS OF THE 12TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1 AND 2, 2003, : 636 - 641