Weakly Paired Associative Learning for Sound and Image Representations via Bimodal Associative Memory

Cited by: 5
Authors
Lee, Sangmin [1 ]
Kim, Hyung-Il [2 ]
Ro, Yong Man [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] ETRI, Daejeon, South Korea
Keywords
DOI
10.1109/CVPR52688.2022.01028
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Data representation learning without labels has attracted increasing attention because it does not require human annotation. Recently, representation learning has been extended to bimodal data, especially sound and image, which are closely related to basic human senses. Existing sound and image representation learning methods necessarily require a large number of corresponding sound-image pairs. Therefore, it is difficult to ensure their effectiveness in the weakly paired condition, where paired bimodal data are scarce. In fact, according to human cognitive studies, the cognitive functions in the human brain for a certain modality can be enhanced by receiving other modalities, even ones that are not directly paired. Based on this observation, we propose a new problem for the weakly paired condition: how to boost the representation of a certain modality even by using unpaired data from another modality. To address this problem, we introduce a novel bimodal associative memory (BMA-Memory) with key-value switching. It enables building a sound-image association from a small amount of paired bimodal data and reinforcing that association with the easily obtainable large amount of unpaired data. Through the proposed associative learning, it is possible to reinforce the representation of a certain modality (e.g., sound) even by using unpaired data from the other modality (e.g., images).
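The key-value switching idea described in the abstract can be illustrated with a toy sketch: hold one slot bank per modality, address the memory softly with a query from one modality, and read out the paired slots of the other modality; swapping which bank serves as keys and which as values lets either modality query the association. The class name, slot sizes, and softmax addressing below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class BimodalAssociativeMemory:
    """Toy bimodal key-value memory with key-value switching.

    Hypothetical sketch: slot counts, dimensions, and the softmax
    addressing scheme are assumptions for illustration only.
    """

    def __init__(self, num_slots=8, dim=4, seed=0):
        rng = np.random.default_rng(seed)
        # One slot bank per modality; in the paper these would be learned.
        self.sound_slots = rng.standard_normal((num_slots, dim))
        self.image_slots = rng.standard_normal((num_slots, dim))

    def _attend(self, query, keys, values):
        # Soft addressing: similarity scores -> softmax -> weighted read.
        scores = keys @ query
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values

    def recall(self, query, source="sound"):
        """Key-value switching: the querying modality's bank acts as keys,
        the other modality's bank acts as values."""
        if source == "sound":
            # Sound query -> associated image-side representation.
            return self._attend(query, self.sound_slots, self.image_slots)
        # Image query -> associated sound-side representation.
        return self._attend(query, self.image_slots, self.sound_slots)
```

A usage example: `BimodalAssociativeMemory().recall(np.ones(4), source="sound")` returns a 4-dimensional image-side read-out for a sound-side query; passing `source="image"` switches the roles of the two banks.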
Pages: 10524-10533
Page count: 10