Authors:
Ramaswamy, Janani [1]; Das, Sukhendu [1]
Institutions:
[1] IIT Madras, Dept Comp Sci & Engn, Visualizat & Percept Lab, Madras, Tamil Nadu, India
Source:
2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2020
Keywords:
SEPARATING STYLE;
DOI:
None available
CLC classification:
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104; 0812; 0835; 1405;
Abstract:
For every event occurring in the real world, a sound is most often associated with the corresponding visual scene. Humans possess an inherent ability to automatically map audio content to visual scenes, leading to an effortless and enhanced understanding of the underlying event. This triggers an interesting question: can this natural correspondence between video and audio, which has been only sparsely explored so far, be learned by a machine and modeled jointly to localize the sound source in a visual scene? In this paper, we propose a novel algorithm that addresses the problem of localizing the sound source in unconstrained videos using efficient fusion and attention mechanisms. Two novel blocks, namely the Audio Visual Fusion Block (AVFB) and the Segment-Wise Attention Block (SWAB), have been developed for this purpose. Quantitative and qualitative evaluations show that the same algorithm, with minor modifications, can serve the purpose of sound localization under three different types of learning: supervised, weakly supervised and unsupervised. A novel Audio Visual Triplet Gram Matrix Loss (AVTGML) has been proposed as a loss function to learn the localization in an unsupervised way. Our empirical evaluations demonstrate a significant increase in performance over the existing state-of-the-art methods, serving as a testimony to the superiority of our proposed approach.
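The abstract names the Audio Visual Triplet Gram Matrix Loss (AVTGML) but does not give its formula. As a rough illustration only, not the paper's actual formulation, a triplet loss computed over Gram matrices of feature maps might look like the sketch below; the Frobenius distance, the margin value, the normalization, and the `(channels, positions)` feature shape are all assumptions:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels x positions) feature map,
    normalized by the number of entries so scale is comparable
    across feature maps of different sizes."""
    c, n = features.shape
    return (features @ features.T) / (c * n)

def triplet_gram_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss over Gram-matrix statistics:
    pulls the anchor's Gram matrix toward the positive's and
    pushes it away from the negative's by at least `margin`."""
    g_a = gram_matrix(anchor)
    g_p = gram_matrix(positive)
    g_n = gram_matrix(negative)
    d_pos = np.linalg.norm(g_a - g_p)  # Frobenius distance
    d_neg = np.linalg.norm(g_a - g_n)
    return max(0.0, d_pos - d_neg + margin)
```

In a triplet setup for this task, the anchor and positive would typically be features from corresponding audio/visual segments and the negative from a mismatched segment, so that matched pairs end up closer in Gram-matrix space than mismatched ones.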