Cross-Task Attention Network: Improving Multi-task Learning for Medical Imaging Applications

Cited by: 0
Authors
Kim, Sangwook [1]
Purdie, Thomas G. [1,2,4,8]
McIntosh, Chris [1,2,3,5,6,7]
Affiliations
[1] Univ Toronto, Dept Med Biophys, Toronto, ON, Canada
[2] Univ Hlth Network, Princess Margaret Canc Ctr, Toronto, ON, Canada
[3] Univ Hlth Network, Toronto Gen Res Inst, Toronto, ON, Canada
[4] Univ Hlth Network, Princess Margaret Res Inst, Toronto, ON, Canada
[5] Univ Hlth Network, Peter Munk Cardiac Ctr, Toronto, ON, Canada
[6] Univ Toronto, Dept Med Imaging, Toronto, ON, Canada
[7] Vector Inst, Toronto, ON, Canada
[8] Univ Toronto, Dept Radiat Oncol, Toronto, ON, Canada
Keywords
Multi-Task Learning; Cross Attention; Automated Radiotherapy
DOI
10.1007/978-3-031-47401-9_12
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-task learning (MTL) is a powerful approach in deep learning that leverages information from multiple tasks during training to improve model performance. In medical imaging, MTL has shown great potential for solving a variety of tasks. However, existing MTL architectures in medical imaging are limited in how they share information across tasks, reducing the potential performance gains of MTL. In this study, we introduce a novel attention-based MTL framework to better leverage inter-task interactions for tasks ranging from pixel-level to image-level predictions. Specifically, we propose a Cross-Task Attention Network (CTAN), which uses cross-task attention mechanisms to incorporate information by interacting across tasks. We validated CTAN on four medical imaging datasets that span different domains and tasks, including: radiation treatment planning prediction using planning CT images of two different target cancers (Prostate, OpenKBP); pigmented skin lesion segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19 diagnosis and severity prediction using chest CT scans (STOIC). Our study demonstrates the effectiveness of CTAN in improving the accuracy of medical imaging tasks. Compared to standard single-task learning (STL), CTAN demonstrated a 4.67% improvement in performance and outperformed both widely used MTL baselines: hard parameter sharing (HPS), by an average performance improvement of 3.22%, and the multi-task attention network (MTAN), whose performance was 5.38% lower relative to CTAN. These findings highlight the significance of our proposed MTL framework for solving medical imaging tasks and its potential to improve accuracy across domains.
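The abstract describes cross-task attention as the core mechanism: features from one task query features from another task so that information flows between task branches. The sketch below is not the paper's implementation; it is a minimal, generic scaled dot-product cross-attention in numpy, with hypothetical "segmentation" and "diagnosis" token sets standing in for two task branches, purely to illustrate the idea.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_attention(feat_q, feat_kv):
    """Tokens of one task (queries) attend to tokens of another task
    (keys/values), returning features fused across tasks."""
    d = feat_q.shape[-1]
    scores = feat_q @ feat_kv.T / np.sqrt(d)   # (n_q, n_kv) similarity
    weights = softmax(scores, axis=-1)         # each query's weights sum to 1
    return weights @ feat_kv                   # (n_q, d) fused features

# toy example: 4 segmentation-branch tokens query 6 diagnosis-branch tokens
rng = np.random.default_rng(0)
seg = rng.standard_normal((4, 8))    # hypothetical task-A features, 8-dim
diag = rng.standard_normal((6, 8))   # hypothetical task-B features, 8-dim
fused = cross_task_attention(seg, diag)
print(fused.shape)  # (4, 8)
```

In a full MTL architecture such as CTAN, a block like this would sit between task-specific branches of a shared encoder-decoder, with learned query/key/value projections rather than raw features; the numpy version above only captures the attention arithmetic.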
Pages: 119 - 128 (10 pages)