TANet: Transformer-based asymmetric network for RGB-D salient object detection

Cited by: 6
Authors
Liu, Chang [1 ]
Yang, Gang [1 ,3 ]
Wang, Shuo [1 ]
Wang, Hangxu [1 ,2 ]
Zhang, Yunhua [1 ]
Wang, Yutao [1 ]
Affiliations
[1] Northeastern Univ, Shenyang, Liaoning, Peoples R China
[2] DUT Artificial Intelligence Inst, Dalian, Peoples R China
[3] Northeastern Univ, Wenhua Rd, Shenyang 110000, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
computer vision; image segmentation; object detection; region
DOI
10.1049/cvi2.12177
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Existing RGB-D salient object detection methods mainly rely on symmetric two-stream Convolutional Neural Network (CNN)-based networks to extract RGB and depth features separately. However, this symmetric structure suffers from two problems: first, CNNs are limited in their ability to learn global context; second, the symmetric two-stream design ignores the inherent differences between the two modalities. In this study, a Transformer-based asymmetric network is proposed to tackle these issues. The authors employ the powerful feature-extraction capability of a Transformer to extract global semantic information from RGB data and design a lightweight CNN backbone to extract spatial structure information from depth data without pre-training. The asymmetric hybrid encoder effectively reduces the number of parameters in the model and increases speed without sacrificing performance. A cross-modal feature fusion module is then designed that mutually enhances and fuses the RGB and depth features. Finally, the authors add edge prediction as an auxiliary task and propose an edge enhancement module to generate sharper contours. Extensive experiments demonstrate that the method outperforms 14 state-of-the-art RGB-D methods on six public datasets. The code will be released at .
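The asymmetric design described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' actual implementation: `TinyTransformerBranch`, `LightDepthBranch`, and `CrossModalFusion` are illustrative stand-ins showing the idea of a heavy Transformer branch for global RGB context paired with a lightweight, trained-from-scratch CNN branch for depth, followed by a simple mutual-gating fusion.

```python
# Hypothetical sketch of an asymmetric RGB-D encoder (illustrative only,
# not the paper's architecture or code).
import torch
import torch.nn as nn


class TinyTransformerBranch(nn.Module):
    """Patch-embed the RGB image and apply self-attention for global context."""
    def __init__(self, dim=64, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=dim * 2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, rgb):
        x = self.embed(rgb)                    # B, C, H/p, W/p
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # B, HW, C token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class LightDepthBranch(nn.Module):
    """Small CNN (no pre-training) extracting spatial structure from depth."""
    def __init__(self, dim=64, patch=8):
        super().__init__()
        # two strided convs give the same total downsampling as the patch embed
        self.net = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=patch // 2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, depth):
        return self.net(depth)


class CrossModalFusion(nn.Module):
    """Mutually enhance RGB/depth features with sigmoid gates, then fuse."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate_r = nn.Conv2d(dim, dim, 1)
        self.gate_d = nn.Conv2d(dim, dim, 1)
        self.merge = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, fr, fd):
        fr = fr + fr * torch.sigmoid(self.gate_d(fd))  # depth enhances RGB
        fd = fd + fd * torch.sigmoid(self.gate_r(fr))  # RGB enhances depth
        return self.merge(torch.cat([fr, fd], dim=1))
```

Because the depth branch is a shallow CNN, it contributes far fewer parameters than the Transformer branch, which is the asymmetry the abstract credits for the reduced model size and higher speed.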
Pages: 415-430
Page count: 16
Related Articles
50 records in total
  • [1] Transformer-based difference fusion network for RGB-D salient object detection
    Cui, Zhi-Qiang
    Wang, Feng
    Feng, Zheng-Yong
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [2] Swin Transformer-Based Edge Guidance Network for RGB-D Salient Object Detection
    Wang, Shuaihui
    Jiang, Fengyi
    Xu, Boqian
    SENSORS, 2023, 23 (21)
  • [3] GroupTransNet: Group transformer network for RGB-D salient object detection
    Fang, Xian
    Jiang, Mingfeng
    Zhu, Jinchao
    Shao, Xiuli
    Wang, Hongpeng
    NEUROCOMPUTING, 2024, 594
  • [4] Asymmetric deep interaction network for RGB-D salient object detection
    Wang, Feifei
    Li, Yongming
    Wang, Liejun
    Zheng, Panpan
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 266
  • [5] TriTransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network
    Liu, Zhengyi
    Wang, Yuan
    Tu, Zhengzheng
    Xiao, Yun
    Tang, Bin
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4481 - 4490
  • [6] CATNet: A Cascaded and Aggregated Transformer Network for RGB-D Salient Object Detection
    Sun, Fuming
    Ren, Peng
    Yin, Bowen
    Wang, Fasheng
    Li, Haojie
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 2249 - 2262
  • [7] Dual Swin-transformer based mutual interactive network for RGB-D salient object detection
    Zeng, Chao
    Kwong, Sam
    Ip, Horace
    NEUROCOMPUTING, 2023, 559
  • [8] Asymmetric cross-modality interaction network for RGB-D salient object detection
    Su, Yiming
    Gao, Haoran
    Wang, Mengyin
    Wang, Fasheng
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 275
  • [9] MULTI-MODAL TRANSFORMER FOR RGB-D SALIENT OBJECT DETECTION
    Song, Peipei
    Zhang, Jing
    Koniusz, Piotr
    Barnes, Nick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 2466 - 2470
  • [10] AirSOD: A Lightweight Network for RGB-D Salient Object Detection
    Zeng, Zhihong
    Liu, Haijun
    Chen, Fenglei
    Tan, Xiaoheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (03) : 1656 - 1669