RGB-T image analysis technology and application: A survey

Cited by: 24
Authors
Song, Kechen [1 ,2 ,3 ]
Zhao, Ying [1 ,2 ,3 ]
Huang, Liming [1 ,2 ,3 ]
Yan, Yunhui [1 ,2 ,3 ]
Meng, Qinggang [4 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Shenyang 110819, Liaoning, Peoples R China
[2] Northeastern Univ, Natl Frontiers Sci Ctr Ind Intelligence & Syst Opt, Shenyang 110819, Peoples R China
[3] Minist Educ, Key Lab Data Analyt & Optimizat Smart Ind, Shenyang 110819, Liaoning, Peoples R China
[4] Loughborough Univ, Dept Comp Sci, Loughborough LE11 3TU, England
Funding
National Natural Science Foundation of China;
Keywords
RGB-T images; Visible-thermal; Image fusion; Salient object detection; Pedestrian detection; Object tracking; Person re-identification; MODALITY PERSON REIDENTIFICATION; GENERATIVE ADVERSARIAL NETWORK; FUSION NETWORK; SEMANTIC SEGMENTATION; PEDESTRIAN DETECTION; SALIENCY DETECTION; ATTENTION NETWORK; SENSOR FUSION; FRAMEWORK; CONSISTENT;
DOI
10.1016/j.engappai.2023.105919
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
RGB-Thermal infrared (RGB-T) image analysis has been actively studied in recent years. Over the past decade it has received wide attention and seen substantial research progress across many applications. This paper provides a comprehensive review of RGB-T image analysis technology and applications, covering several active fields: image fusion, salient object detection, semantic segmentation, pedestrian detection, object tracking, and person re-identification. The first two serve as preprocessing technologies for many computer vision tasks; the rest are application directions. The paper extensively reviews 400+ papers spanning more than 10 different application tasks. For each task, it comprehensively analyzes the various methods and presents the performance of the state-of-the-art approaches. It also offers an in-depth analysis of the challenges facing RGB-T image analysis and of potential future technical improvements.
Pages: 36
Related Papers
50 records
  • [41] RGB-T Saliency Detection Based on Multiscale Modal Reasoning Interaction
    Wu, Yunhe
    Jia, Tong
    Chang, Xingya
    Wang, Hao
    Chen, Dongyue
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [42] Modal complementary fusion network for RGB-T salient object detection
    Ma, Shuai
    Song, Kechen
    Dong, Hongwen
    Tian, Hongkun
    Yan, Yunhui
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9038 - 9055
  • [43] MiLNet: Multiplex Interactive Learning Network for RGB-T Semantic Segmentation
    Liu, Jinfu
    Liu, Hong
    Li, Xia
    Ren, Jiale
    Xu, Xinhua
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1686 - 1699
  • [44] RGB-T Object Tracking Algorithm Based on CNN Features
    Liu, Lian
    Li, Fusheng
    COMPUTER & DIGITAL ENGINEERING, 2024, (02) : 432 - 435
  • [45] Context-Aware Interaction Network for RGB-T Semantic Segmentation
    Lv, Ying
    Liu, Zhi
    Li, Gongyang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 6348 - 6360
  • [46] Siamese infrared and visible light fusion network for RGB-T tracking
    Jingchao Peng
    Haitao Zhao
    Zhengwei Hu
    Yi Zhuang
    Bofan Wang
    International Journal of Machine Learning and Cybernetics, 2023, 14 : 3281 - 3293
  • [47] AMNet: Learning to Align Multi-Modality for RGB-T Tracking
    Zhang, Tianlu
    He, Xiaoyi
    Jiao, Qiang
    Zhang, Qiang
    Han, Jungong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7386 - 7400
  • [48] PSNet: Parallel symmetric network for RGB-T salient object detection
    Bi, Hongbo
    Wu, Ranwan
    Liu, Ziqi
    Zhang, Jiayuan
    Zhang, Cong
    Xiang, Tian-Zhu
    Wang, Xiufang
    NEUROCOMPUTING, 2022, 511 : 410 - 425
  • [49] ROBUST RGB-T TRACKING VIA CONSISTENCY REGULATED SCENE PERCEPTION
    Kang, Bin
    Liu, Liwei
    Zhao, Shihao
    Du, Songlin
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 510 - 514
  • [50] Online Learning Samples and Adaptive Recovery for Robust RGB-T Tracking
    Liu, Jun
    Luo, Zhongqiang
    Xiong, Xingzhong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (02) : 724 - 737