Depth-aware lightweight network for RGB-D salient object detection

Cited by: 2
Authors
Ling, Liuyi [1 ,2 ]
Wang, Yiwen [1 ,3 ]
Wang, Chengjun [1 ]
Xu, Shanyong [2 ]
Huang, Yourui [2 ]
Affiliations
[1] Anhui Univ Sci & Technol, Sch Artificial Intelligence, Huainan, Peoples R China
[2] Anhui Univ Sci & Technol, Sch Elect & Informat Technol, Huainan, Peoples R China
[3] Anhui Univ Sci & Technol, Sch Artificial Intelligence, Huainan 232001, Peoples R China
Keywords
depth-aware; lightweight; RGB-D salient object detection;
DOI
10.1049/ipr2.12796
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
RGB-D salient object detection (SOD) aims to detect salient objects from an RGB image and its corresponding depth data. Although related networks have achieved appreciable performance, they are ill-suited to mobile devices because they are cumbersome and time-consuming. Existing lightweight networks for RGB-D SOD take depth information as an additional input and integrate it with the colour image, achieving impressive performance. However, the quality of depth information is uneven and its acquisition cost is high. To address this issue, a depth-aware strategy is combined, for the first time, with a lightweight SOD model, the Depth-Aware Lightweight network (DAL), which uses only RGB maps as input and is suitable for mobile devices. The DAL framework consists of a multi-level feature extraction branch, a specially designed channel fusion module (CF) that perceives depth information, and a multi-modal fusion module (MMF) that fuses information from multi-modal feature maps. The proposed DAL is evaluated on five datasets and compared with 14 models. Experimental results demonstrate that it outperforms state-of-the-art lightweight networks. DAL has only 5.6 M parameters and an inference time of 39 ms. Compared with the best-performing lightweight method, it has fewer parameters, faster inference, and higher accuracy.
Pages: 2350-2361
Number of pages: 12