Adaptive Visual-Depth Fusion Transfer

Cited by: 1
Authors
Cai, Ziyun [1 ]
Long, Yang [2 ]
Jing, Xiao-Yuan [1 ]
Shao, Ling [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing, Jiangsu, Peoples R China
[2] Univ Newcastle, Sch Comp, Open Lab, Newcastle Upon Tyne NE4 5TG, Tyne & Wear, England
[3] Inception Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
Keywords
RGB-D data; Domain adaptation; Visual categorization; Domain; Kernel
DOI
10.1007/978-3-030-20870-7_4
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
While the RGB-D classification task has been actively researched in recent years, most existing methods focus on the RGB-D source-to-target transfer task. Such methods cannot address the real-world scenario in which paired depth images are not available. This paper focuses on a more flexible task that recognizes RGB test images by transferring them into the depth domain. Such a scenario retains high performance by exploiting auxiliary depth information, while avoiding the cost of pairing RGB cameras with depth sensors at test time. Existing methods face two challenges: how to utilize the additional depth features, and the domain shift caused by the different imaging mechanisms of conventional RGB cameras and depth sensors. As a step towards bridging this gap, we propose a novel method called adaptive Visual-Depth Fusion Transfer (aVDFT), which takes advantage of the depth information and handles the domain distribution mismatch simultaneously. Our key novelties are: (1) a global visual-depth metric construction algorithm that effectively aligns the RGB and depth data structures; (2) adaptive transformed component extraction for the target domain, conditioned on invariant transfer of location, scale, and depth measurements. To demonstrate the effectiveness of aVDFT, we conduct comprehensive experiments on six pairs of RGB-D datasets covering object recognition, scene classification, and gender recognition, and demonstrate state-of-the-art performance.
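For intuition only: one building block commonly used in this line of work to quantify the "domain distribution mismatch" between RGB and depth feature distributions is a kernel two-sample statistic such as the maximum mean discrepancy (MMD). The sketch below is a generic, hypothetical illustration of that idea, not the published aVDFT algorithm; all names and values (rgb_feats, depth_feats, gamma) are assumptions for illustration.

```python
# Hypothetical sketch: measure RGB-vs-depth feature mismatch with the biased
# squared MMD under an RBF kernel. This is NOT the aVDFT algorithm itself.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb_feats = rng.normal(0.0, 1.0, size=(100, 64))    # stand-in RGB features
    depth_feats = rng.normal(0.5, 1.2, size=(100, 64))  # shifted "depth" features
    print(f"squared MMD (RGB vs. depth): {mmd2(rgb_feats, depth_feats):.4f}")
```

A method in this family would typically learn a transformation of the features that shrinks such a statistic while preserving discriminative structure.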
Pages: 56-73 (18 pages)
Related papers (showing 10 of 50)
  • [21] Chen, Zhi; Du, Yongzhao; Deng, Jianhua; Zhuang, Jiafu; Liu, Peizhong. Adaptive Hyper-Feature Fusion for Visual Tracking. IEEE ACCESS, 2020, 8: 68711-68724.
  • [22] Kaliciak, Leszek; Myrhaug, Hans; Goker, Ayse; Song, Dawei. Adaptive Relevance Feedback for Fusion of Text and Visual Features. 2015 18TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2015: 1322-1329.
  • [23] Shen, Xiang; Han, Dezhi; Zong, Liang; Guo, Zihan; Hua, Jie. Relational reasoning and adaptive fusion for visual question answering. APPLIED INTELLIGENCE, 2024, 54 (06): 5062-5080.
  • [24] Xin, Jing; Chen, Kemin; Bai, Lei; Liu, Ding; Zhang, Jian. Depth Adaptive Zooming Visual Servoing for a Robot with a Zooming Camera. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2013, 10.
  • [25] Luo, Xiaoqing; Yuan, Chenchen; Zhang, Zhancheng. Adaptive image fusion algorithm based on human visual system guided gradient transfer and total variation minimization. JOURNAL OF ELECTRONIC IMAGING, 2018, 27 (05).
  • [26] Yue, Guanghui; Hou, Chunping; Lu, Kaining; Feng, Dandan; Li, Yao. Subjective Visual Comfort Assessment Based on Fusion Time for Depth Information. 2016 11TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE & EDUCATION (ICCSE), 2016: 733-737.
  • [27] Yang, Wei-Jong; Wu, Chih-Chen; Yang, Jar-Ferr. Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation. SENSORS, 2025, 25 (01).
  • [28] Liu, Peng; Zhang, Zonghua; Meng, Zhaozong; Gao, Nan. Deformable Enhancement and Adaptive Fusion for Depth Map Super-Resolution. IEEE SIGNAL PROCESSING LETTERS, 2022, 29: 204-208.
  • [29] Yang, Rui; Zhang, Baohua; Zhang, Yanyue; Lu, Xiaoqi; Gu, Yu; Wang, Yueming; Liu, Xin; Ren, Yan; Li, Jianjun. Moving Object Tracking Algorithm Based on Depth Feature Adaptive Fusion. LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (18).
  • [30] Li, Keqin; Feng, Jian; Zhang, Juan; Xiao, Qi. Adaptive Fusion Feature Transfer Learning Method For NILM. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72.