Depth-aware lightweight network for RGB-D salient object detection

Cited by: 2
Authors
Ling, Liuyi [1 ,2 ]
Wang, Yiwen [1 ,3 ]
Wang, Chengjun [1 ]
Xu, Shanyong [2 ]
Huang, Yourui [2 ]
Affiliations
[1] Anhui Univ Sci & Technol, Sch Artificial Intelligence, Huainan, Peoples R China
[2] Anhui Univ Sci & Technol, Sch Elect & Informat Technol, Huainan, Peoples R China
[3] Anhui Univ Sci & Technol, Sch Artificial Intelligence, Huainan 232001, Peoples R China
Keywords
depth-aware; lightweight; RGB-D salient object detection;
DOI
10.1049/ipr2.12796
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
RGB-D salient object detection (SOD) aims to detect salient objects from an RGB image and its corresponding depth data. Although related networks have achieved appreciable performance, they are cumbersome and time-consuming, and therefore not ideal for mobile devices. Existing lightweight networks for RGB-D SOD use depth information as an additional input and integrate it with the colour image, achieving impressive performance. However, depth information is of uneven quality and costly to acquire. To address this issue, a depth-aware strategy is, for the first time, combined with a lightweight design to propose a Depth-Aware Lightweight network (DAL) for SOD that uses only RGB maps as input and can be deployed on mobile devices. The DAL framework is composed of a multi-level feature extraction branch, a specially designed channel fusion (CF) module that perceives depth information, and a multi-modal fusion (MMF) module that fuses the information of multi-modal feature maps. The proposed DAL is evaluated on five datasets and compared with 14 models. Experimental results demonstrate that DAL outperforms state-of-the-art lightweight networks. DAL has only 5.6 M parameters and an inference time of 39 ms. Compared with the best-performing lightweight method, DAL has fewer parameters, faster inference, and higher accuracy.
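The abstract only names the components of DAL (a multi-level feature extraction branch, a CF module, and an MMF module) without giving their internals. The following is a minimal, hypothetical PyTorch sketch of such a framework, assuming a simple strided-convolution backbone, a squeeze-and-excitation-style CF, and a concatenate-and-fuse MMF; the class names, channel sizes, and module designs are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a DAL-like framework: RGB-only input, multi-level
# feature extraction, channel fusion (CF) per level, multi-modal fusion (MMF).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelFusion(nn.Module):
    """Assumed CF module: squeeze-and-excitation style channel re-weighting."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)  # re-weight channels by learned attention

class MultiModalFusion(nn.Module):
    """Assumed MMF module: reduce, upsample, concatenate, and fuse features."""
    def __init__(self, channels_list, out_channels=32):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in channels_list])
        self.fuse = nn.Conv2d(out_channels * len(channels_list), 1, 3, padding=1)

    def forward(self, feats):
        size = feats[0].shape[-2:]  # align all levels to the largest resolution
        feats = [F.interpolate(r(f), size, mode="bilinear", align_corners=False)
                 for r, f in zip(self.reduce, feats)]
        return self.fuse(torch.cat(feats, dim=1))

class DAL(nn.Module):
    """Hypothetical DAL skeleton: lightweight backbone + CF per level + MMF head."""
    def __init__(self):
        super().__init__()
        chs = [16, 32, 64, 96]  # illustrative channel widths
        self.stages = nn.ModuleList()
        in_c = 3
        for c in chs:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, 3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(c), nn.ReLU(inplace=True)))
            in_c = c
        self.cf = nn.ModuleList([ChannelFusion(c) for c in chs])
        self.mmf = MultiModalFusion(chs)

    def forward(self, rgb):
        feats, x = [], rgb
        for stage, cf in zip(self.stages, self.cf):
            x = stage(x)
            feats.append(cf(x))
        sal = self.mmf(feats)  # low-resolution saliency logits
        return F.interpolate(sal, rgb.shape[-2:], mode="bilinear",
                             align_corners=False)  # full-resolution saliency map

if __name__ == "__main__":
    model = DAL()
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1, 224, 224])
```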
Pages: 2350-2361
Page count: 12