Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering

Times Cited: 0
Authors
You, Chenyu [1 ]
Chen, Nuo [2 ]
Zou, Yuexian [2 ,3 ]
Affiliations
[1] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
[2] Peking Univ, Sch ECE, ADSPLAB, Shenzhen, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
NETWORKS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Spoken question answering (SQA) requires fine-grained understanding of both spoken documents and questions for optimal answer prediction. In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage. In the self-supervised stage, we introduce three auxiliary self-supervised tasks, namely utterance restoration, utterance insertion, and question discrimination, and jointly train the model to capture consistency and coherence among spoken documents without any additional data or annotations. We then learn noise-invariant utterance representations with a contrastive objective by adopting multiple augmentation strategies, including span deletion and span substitution. In addition, we design a Temporal-Alignment attention that semantically aligns speech and text clues in the learned common space, benefiting the SQA task. In this way, the training schemes more effectively guide the generation model toward proper answers. Experimental results show that our model achieves state-of-the-art results on three SQA benchmarks.
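The abstract describes the contrastive stage only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows how span-deletion/span-substitution augmentations and an InfoNCE-style contrastive objective over utterance representations could be combined; all names (`span_augment`, `info_nce`, `MASK_ID`, `tau`) and hyperparameter values are assumptions made for this sketch, and PyTorch is used only as a stand-in framework.

```python
# Minimal sketch (not the paper's code): span-based augmentation plus an
# InfoNCE-style contrastive loss over utterance representations, assuming
# utterances are given as token-id sequences. All names are illustrative.
import random
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical token id used for span substitution


def span_augment(tokens, span_ratio=0.15, p_delete=0.5):
    """Return a noisy view of `tokens` via span deletion or span substitution."""
    tokens = list(tokens)
    span_len = max(1, int(len(tokens) * span_ratio))
    start = random.randrange(0, max(1, len(tokens) - span_len))
    if random.random() < p_delete:
        # Span deletion: drop a contiguous span of tokens.
        return tokens[:start] + tokens[start + span_len:]
    # Span substitution: replace the span with mask tokens.
    return tokens[:start] + [MASK_ID] * span_len + tokens[start + span_len:]


def info_nce(z1, z2, tau=0.1):
    """Contrastive loss pulling together two views of the same utterance.

    z1, z2: (batch, dim) representations of two augmented views, row-aligned
    so that z1[i] and z2[i] come from the same original utterance.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau          # (batch, batch) cosine-similarity matrix
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    utterance = [101, 2054, 2003, 1996, 3437, 102]
    view_a, view_b = span_augment(utterance), span_augment(utterance)
    # Stand-in encoder outputs; a real system would encode both augmented views.
    z1, z2 = torch.randn(8, 256), torch.randn(8, 256)
    print(view_a, view_b, info_nce(z1, z2).item())
```

The diagonal of the similarity matrix holds the positive pairs (two noisy views of the same utterance), so minimizing the cross-entropy pushes matched views together and mismatched utterances apart, which is the noise-invariance property the abstract describes.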
Pages: 28-39
Number of Pages: 12
Related Papers
50 records in total (items [31]-[40] shown below)
• [31] Overcoming Language Priors with Self-supervised Learning for Visual Question Answering. Zhi, Xi; Mao, Zhendong; Liu, Chunxiao; Zhang, Peng; Wang, Bin; Zhang, Yongdong. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020: 1083-1089.
• [32] Bridging the Cross-Modality Semantic Gap in Visual Question Answering. Wang, Boyue; Ma, Yujian; Li, Xiaoyan; Gao, Junbin; Hu, Yongli; Yin, Baocai. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36(3): 4519-4531.
• [33] Self-supervised Spoken Question Understanding and Speaking with Automatic Vocabulary Learning. Toyoda, Keisuke; Kimura, Yusuke; Zhang, Mingxin; Hino, Kent; Mori, Kosuke; Shinozaki, Takahiro. 2021 24TH CONFERENCE OF THE ORIENTAL COCOSDA INTERNATIONAL COMMITTEE FOR THE CO-ORDINATION AND STANDARDISATION OF SPEECH DATABASES AND ASSESSMENT TECHNIQUES (O-COCOSDA), 2021: 37-42.
• [34] Bridging the Cross-Modality Semantic Gap in Visual Question Answering. Wang, Boyue; Ma, Yujian; Li, Xiaoyan; Gao, Junbin; Hu, Yongli; Yin, Baocai. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1-13.
• [35] Representation Learning for Cross-Modality Classification. van Tulder, Gijs; de Bruijne, Marleen. MEDICAL COMPUTER VISION AND BAYESIAN AND GRAPHICAL MODELS FOR BIOMEDICAL IMAGING, 2017, 10081: 126-136.
• [36] WeCromCL: Weakly Supervised Cross-Modality Contrastive Learning for Transcription-Only Supervised Text Spotting. Wu, Jingjing; Fang, Zhengyao; Lyu, Pengyuan; Zhang, Chengquan; Chen, Fanglin; Lu, Guangming; Pei, Wenjie. COMPUTER VISION - ECCV 2024, PT XXXI, 2025, 15089: 289-306.
• [37] Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering. Wang, Yan; Li, Peize; Si, Qingyi; Zhang, Hanwen; Zang, Wenyu; Lin, Zheng; Fu, Peng. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20(3).
• [38] Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning. Sun, Jinghan; Wei, Dong; Ma, Kai; Wang, Liansheng; Zheng, Yefeng. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 2307-2315.
• [39] Mixing up contrastive learning: Self-supervised representation learning for time series. Wickstrom, Kristoffer; Kampffmeyer, Michael; Mikalsen, Karl Oyvind; Jenssen, Robert. PATTERN RECOGNITION LETTERS, 2022, 155: 54-61.
• [40] Self-supervised Graph-level Representation Learning with Adversarial Contrastive Learning. Luo, Xiao; Ju, Wei; Gu, Yiyang; Mao, Zhengyang; Liu, Luchen; Yuan, Yuhui; Zhang, Ming. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18(2).