SEQUENTIAL CROSS ATTENTION BASED MULTI-TASK LEARNING

Cited by: 3
Authors
Kim, Sunkyung [1]
Choi, Hyesong [1]
Min, Dongbo [1]
Affiliations
[1] Ewha Womans Univ, Dept Comp Sci & Engn, Seoul, South Korea
Keywords
Multi-task learning; self-attention; cross attention; semantic segmentation; monocular depth estimation
DOI
10.1109/ICIP46576.2022.9897871
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In multi-task learning (MTL) for visual scene understanding, it is crucial to transfer useful information between multiple tasks with minimal interference. In this paper, we propose a novel architecture that effectively transfers informative features by applying the attention mechanism to the multi-scale features of the tasks. Since applying the attention module directly to all possible features across scales and tasks incurs high complexity, we propose to apply the attention module sequentially along the task and scale dimensions. The cross-task attention module (CTAM) is first applied to facilitate the exchange of relevant information between the features of different tasks at the same scale. The cross-scale attention module (CSAM) then aggregates useful information from feature maps at different resolutions within the same task. We also attempt to capture long-range dependencies through a self-attention module in the feature extraction network. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the NYUD-v2 and PASCAL-Context datasets. Our code is available at https://github.com/kimsunkyung/SCA-MTL
Pages: 2311 - 2315
Number of pages: 5
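The abstract above describes applying attention sequentially: a cross-task attention module (CTAM) first exchanges information between tasks at each scale, and a cross-scale attention module (CSAM) then aggregates information across scales within each task. Below is a minimal PyTorch sketch of that sequential ordering only; the module internals (multi-head attention over flattened feature maps, averaging the other tasks' features as cross-task context, bilinear upsampling before cross-scale aggregation, and the CrossAttention / SequentialCrossAttention names) are illustrative assumptions, not the authors' implementation, which is available at the repository linked in the abstract.

# Minimal sketch of the sequential attention ordering described in the abstract:
# cross-task attention (CTAM) at each scale, then cross-scale attention (CSAM)
# within each task. Module internals here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttention(nn.Module):
    """Tokens of `query_feat` attend to tokens of `context_feat` (hypothetical design)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat, context_feat):
        # Both inputs: (B, C, H, W). Flatten spatial positions into token sequences.
        B, C, H, W = query_feat.shape
        q = query_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        ctx = context_feat.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        out, _ = self.attn(self.norm(q), ctx, ctx)     # queries from one map, keys/values from the other
        out = q + out                                  # residual connection (assumption)
        return out.transpose(1, 2).reshape(B, C, H, W)


class SequentialCrossAttention(nn.Module):
    """Apply cross-task attention per scale, then cross-scale attention per task."""

    def __init__(self, dim, num_tasks, num_scales, heads=4):
        super().__init__()
        self.ctam = nn.ModuleList([CrossAttention(dim, heads) for _ in range(num_scales)])
        self.csam = nn.ModuleList([CrossAttention(dim, heads) for _ in range(num_tasks)])

    def forward(self, feats):
        # feats[t][s]: feature map of task t at scale s; all maps share `dim` channels.
        num_tasks, num_scales = len(feats), len(feats[0])

        # CTAM step: exchange information between tasks at the same scale.
        refined = [[None] * num_scales for _ in range(num_tasks)]
        for s in range(num_scales):
            for t in range(num_tasks):
                # Context = mean of the other tasks' features at this scale (simplification).
                others = [feats[o][s] for o in range(num_tasks) if o != t]
                context = torch.stack(others).mean(dim=0)
                refined[t][s] = self.ctam[s](feats[t][s], context)

        # CSAM step: aggregate information across scales within each task.
        outputs = []
        for t in range(num_tasks):
            target = refined[t][0].shape[-2:]  # finest resolution
            pooled = torch.stack([
                F.interpolate(refined[t][s], size=target, mode="bilinear", align_corners=False)
                for s in range(num_scales)
            ]).mean(dim=0)
            outputs.append(self.csam[t](refined[t][0], pooled))
        return outputs


if __name__ == "__main__":
    # Two tasks (e.g. segmentation and depth) with three feature scales of 64 channels each.
    feats = [[torch.randn(1, 64, 32 // 2 ** s, 32 // 2 ** s) for s in range(3)]
             for _ in range(2)]
    model = SequentialCrossAttention(dim=64, num_tasks=2, num_scales=3)
    print([o.shape for o in model(feats)])  # one refined finest-scale map per task

Only the task-then-scale ordering is taken from the abstract; treating the two attention dimensions separately rather than jointly appears to be the source of the complexity reduction the abstract motivates, while every other design choice in this sketch is an assumption.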