Learning Dual Encoding Model for Adaptive Visual Understanding in Visual Dialogue

Cited by: 22
Authors
Yu, Jing [1 ,2 ]
Jiang, Xiaoze [3 ]
Qin, Zengchang [3 ]
Zhang, Weifeng [4 ]
Hu, Yue [1 ,2 ]
Wu, Qi [5 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Beihang Univ, Sch ASEE, Intelligent Comp & Machine Learning Lab, Beijing 100191, Peoples R China
[4] Jiaxing Univ, Coll Math Phys & Informat Engn, Jiaxing 314001, Peoples R China
[5] Univ Adelaide, Australian Ctr Robot Vis, Adelaide, SA 5005, Australia
Funding
National Natural Science Foundation of China;
Keywords
Visualization; Semantics; History; Task analysis; Cognition; Feature extraction; Adaptation models; Dual encoding; visual module; semantic module; visual relationship; dense caption; visual dialogue;
DOI
10.1109/TIP.2020.3034494
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unlike the Visual Question Answering task, which requires answering only a single question about an image, the Visual Dialogue task involves multiple rounds of dialogue covering a broad range of visual content that may relate to any object, relationship, or high-level semantics. One of the key challenges in Visual Dialogue is therefore to learn a more comprehensive and semantically rich image representation that can adaptively attend to the visual content referred to by varying questions. In this paper, we first propose a novel scheme to depict an image from both visual and semantic views. Specifically, the visual view captures appearance-level information in an image, including objects and their visual relationships, while the semantic view enables the agent to understand high-level visual semantics ranging from the whole image to local regions. Furthermore, on top of such dual-view image representations, we propose a Dual Encoding Visual Dialogue (DualVD) module, which adaptively selects question-relevant information from the visual and semantic views in a hierarchical manner. To demonstrate the effectiveness of DualVD, we propose two novel visual dialogue models by applying it to the Late Fusion framework and the Memory Network framework. The proposed models achieve state-of-the-art results on three benchmark datasets. A critical advantage of the DualVD module lies in its interpretability: by explicitly visualizing the gate values, we can analyze which modality (visual or semantic) contributes more to answering the current question. This offers insight into the information-selection mechanism of the Visual Dialogue task. The code is available at https://github.com/JXZe/Learning_DualVD.
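The gated selection the abstract describes can be illustrated with a minimal sketch: a question-conditioned scalar gate per view weights the visual and semantic features before fusion, and the gate values themselves expose which modality dominated. This is a simplified illustration, not the paper's implementation; the names `Wv`, `Ws`, and the single-scalar gating are assumptions for exposition.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def gated_dual_view_fusion(visual_feat, semantic_feat, question_feat, Wv, Ws):
    """Hypothetical question-conditioned gated fusion of two views.

    visual_feat, semantic_feat, question_feat: 1-D feature vectors (same dim).
    Wv, Ws: assumed learned projection vectors producing scalar gate logits.
    Returns the fused feature plus the two gate values, which can be
    inspected to see which view contributed more to the answer.
    """
    gv = sigmoid(question_feat @ Wv)  # weight on the visual view, in (0, 1)
    gs = sigmoid(question_feat @ Ws)  # weight on the semantic view, in (0, 1)
    fused = gv * visual_feat + gs * semantic_feat
    return fused, gv, gs
```

Inspecting `gv` versus `gs` for a given question mirrors the interpretability analysis the abstract mentions: a larger gate value indicates the corresponding view supplied more of the fused representation.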
Pages: 220-233
Page count: 14