Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions

Cited by: 16
Authors
Monfort, Mathew [1 ]
Jin, SouYoung [1 ]
Liu, Alexander [1 ]
Harwath, David [2 ]
Feris, Rogerio [3 ]
Glass, James [1 ]
Oliva, Aude [1 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] UT Austin, Austin, TX USA
[3] IBM Res, Yorktown Hts, NY USA
Keywords
DOI
10.1109/CVPR46437.2021.01463
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
When people observe events, they are able to abstract key information and build concise summaries of what is happening. These summaries include contextual and semantic information describing the important high-level details (what, where, who and how) of the observed event and exclude background information that is deemed unimportant to the observer. With this in mind, the descriptions people generate for videos of different dynamic events can greatly improve our understanding of the key information of interest in each video. These descriptions can be captured in captions that provide expanded attributes for video labeling (e.g. actions/objects/scenes/sentiment/etc.) while allowing us to gain new insight into what people find important or necessary to summarize specific events. Existing caption datasets for video understanding are either small in scale or restricted to a specific domain. To address this, we present the Spoken Moments (S-MiT) dataset of 500k spoken captions each attributed to a unique short video depicting a broad range of different events. We collect our descriptions using audio recordings to ensure that they remain as natural and concise as possible while allowing us to scale the size of a large classification dataset. In order to utilize our proposed dataset, we present a novel Adaptive Mean Margin (AMM) approach to contrastive learning and evaluate our models on video/caption retrieval on multiple datasets. We show that our AMM approach consistently improves our results and that models trained on our Spoken Moments dataset generalize better than those trained on other video-caption datasets. http://moments.csail.mit.edu/spoken.html
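The abstract describes the Adaptive Mean Margin (AMM) contrastive objective only at a high level. The short Python (PyTorch) sketch below illustrates one way a margin-based cross-modal retrieval loss could adapt its margin to the mean similarity of the in-batch negatives; the function name adaptive_mean_margin_loss, the base_margin parameter, and the specific adaptation rule are assumptions made for this sketch, not the exact formulation from the paper.

import torch
import torch.nn.functional as F

def adaptive_mean_margin_loss(video_emb, caption_emb, base_margin=0.001):
    # video_emb, caption_emb: (B, D) L2-normalized embeddings of paired
    # video clips and spoken captions; positives share the same batch index.
    sim = video_emb @ caption_emb.t()          # (B, B) cosine similarities
    b = sim.size(0)
    pos_mask = torch.eye(b, dtype=torch.bool, device=sim.device)

    # Assumed adaptation rule: grow the margin with the mean similarity of
    # the negative pairs in the batch (detached so it acts as a constant).
    neg_mean = sim[~pos_mask].mean().detach()
    margin = base_margin + neg_mean.clamp(min=0.0)

    # Standard margin-softmax construction: subtract the margin from the
    # positive logits so positives must beat negatives by at least `margin`.
    logits = sim - margin * pos_mask.float()

    targets = torch.arange(b, device=sim.device)
    # Symmetric cross-entropy covers both retrieval directions
    # (video -> caption and caption -> video).
    loss_v2c = F.cross_entropy(logits, targets)
    loss_c2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2c + loss_c2v)

# Example usage with random, normalized embeddings (batch of 8 pairs, 512-d):
# v = F.normalize(torch.randn(8, 512), dim=-1)
# c = F.normalize(torch.randn(8, 512), dim=-1)
# loss = adaptive_mean_margin_loss(v, c)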
Pages: 14866-14876
Number of pages: 11