An evidential data fusion method for affective music video retrieval

Citations: 17
Authors
Nemati, Shahla [1 ]
Naghsh-Nilchi, Ahmad Reza [2 ]
Affiliations
[1] Univ Isfahan, Fac Comp Engn, Dept Comp Architecture, Esfahan, Iran
[2] Univ Isfahan, Fac Comp Engn, Dept Artificial Intelligence, Esfahan, Iran
Keywords
Affective music video retrieval; Dempster-Shafer theory; information fusion; information retrieval; emotion detection; recognition; combination; framework; rule
DOI
10.3233/IDA-160029
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Affective video retrieval systems seek to retrieve video content according to its impact on viewers' emotions. These systems typically apply a multimodal approach that fuses information from different modalities to determine the affect category. The main drawback of the information fusion methods used in existing affective video retrieval systems is that they treat all modalities as equally important and therefore ignore conflicts among them. To address this drawback, a new information fusion method is proposed based on the Dempster-Shafer theory of evidence. The proposed method assigns different weights to modalities based on their correlation and their level of confidence. Experiments are run on the video clips of the DEAP dataset. Results indicate that the proposed method significantly outperforms existing evidential information fusion methods.
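To make the evidential fusion idea concrete, the following is a minimal Python sketch of standard Dempster-Shafer combination with reliability discounting, which is the generic machinery the abstract refers to. It is not the authors' exact method: the function names, the two-class frame, and the reliability weights (0.9 and 0.6) are hypothetical illustrations, whereas in the paper the per-modality weights would be derived from inter-modality correlation and classifier confidence.

```python
import itertools

def discount(mass, alpha):
    """Shafer discounting: scale each focal element's mass by reliability
    factor alpha and move the remaining (1 - alpha) to the full frame."""
    theta = frozenset().union(*mass.keys())
    out = {a: alpha * m for a, m in mass.items()}
    out[theta] = out.get(theta, 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment (keys are frozensets of hypotheses)."""
    fused, conflict = {}, 0.0
    for a, b in itertools.product(m1, m2):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + m1[a] * m2[b]
        else:
            conflict += m1[a] * m2[b]
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; rule undefined.")
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Toy example: audio and visual classifiers output beliefs over two
# hypothetical affect classes, "high" and "low" arousal.
HA, LA = frozenset({"high"}), frozenset({"low"})
THETA = HA | LA

audio  = {HA: 0.7, LA: 0.2, THETA: 0.1}
visual = {HA: 0.3, LA: 0.5, THETA: 0.2}

# Hypothetical reliability weights; the paper derives such weights from
# modality correlation and confidence rather than fixing them by hand.
fused = combine(discount(audio, 0.9), discount(visual, 0.6))
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```

Discounting is what lets the two modalities carry unequal weight: a less reliable modality has part of its mass shifted to the whole frame, so it contributes less conflict when the sources disagree.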
Pages: 427-441
Number of pages: 15