Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies

Cited by: 20
Authors
Chu, Eric [1 ]
Roy, Deb [1 ]
Affiliations
[1] MIT, Media Lab, Cambridge, MA 02139 USA
Keywords
visual sentiment analysis; audio sentiment analysis; multimodal; emotions; emotional arcs; stories; video
DOI
10.1109/ICDM.2017.100
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Stories can have tremendous power: they are not only entertaining, but can also activate our interests and mobilize our actions. The degree to which a story resonates with its audience may be reflected, in part, in the emotional journey it takes the audience on. In this paper, we use machine learning methods to construct emotional arcs in movies, calculate families of arcs, and demonstrate the ability of certain arcs to predict audience engagement. The system is applied to Hollywood films and high-quality shorts found on the web. We begin by using deep convolutional neural networks for audio and visual sentiment analysis. These models are trained on both new and existing large-scale datasets, after which they can be used to compute separate audio and visual emotional arcs. We then crowdsource annotations for 30-second video clips extracted from highs and lows in the arcs in order to assess the micro-level precision of the system, with precision measured as agreement in polarity between the system's predictions and annotators' ratings. These annotations are also used to combine the audio and visual predictions. Next, we look at macro-level characterizations of movies by investigating whether there exist 'universal shapes' of emotional arcs. In particular, we develop a clustering approach to discover distinct classes of emotional arcs. Finally, we show on a sample corpus of short web videos that certain emotional arcs are statistically significant predictors of the number of comments a video receives. These results suggest that the emotional arcs learned by our approach successfully represent macroscopic aspects of a video story that drive audience engagement. Such machine understanding could be used to predict audience reactions to video stories, ultimately improving our ability as storytellers to communicate with each other.
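The pipeline the abstract describes — per-window sentiment scores smoothed into an emotional arc, then arcs grouped into distinct families — can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the function names, window sizes, and the use of plain k-means are all assumptions for demonstration.

```python
# Illustrative sketch (assumed names and parameters, not the paper's code):
# smooth a per-window sentiment series into a fixed-length "arc", then
# group arcs by shape with a minimal k-means over Euclidean distance.
from statistics import mean
import random

def emotional_arc(scores, window=5, length=20):
    """Moving-average smooth a sentiment series, then resample it to a
    fixed length so arcs from videos of different durations are comparable."""
    smoothed = [mean(scores[max(0, i - window): i + window + 1])
                for i in range(len(scores))]
    return [smoothed[int(i * (len(smoothed) - 1) / (length - 1))]
            for i in range(length)]

def cluster_arcs(arcs, k=2, iters=20, seed=0):
    """Tiny k-means: returns k centroid arcs representing 'families' of shapes."""
    rng = random.Random(seed)
    centers = rng.sample(arcs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for arc in arcs:
            # Assign each arc to its nearest centroid by squared distance.
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(arc, centers[c])))
            groups[j].append(arc)
        # Recompute centroids as the pointwise mean of each group's arcs.
        centers = [[mean(vals) for vals in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers
```

For example, feeding in several steadily rising and several steadily falling sentiment series would yield two centroid arcs, one per shape family; the paper's actual clustering method and arc representation may differ.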
Pages: 829-834
Page count: 6
Related Papers
(50 records in total)
  • [21] Zou, Jialing; Mei, Jiahao; Ye, Guangze; Huai, Tianyu; Shen, Qiwei; Dong, Daoguo. EMID: An Emotional Aligned Dataset in Audio-Visual Modality. Proceedings of the 1st International Workshop on Multimedia Content Generation and Evaluation, MCGE 2023: New Methods and Practice, 2023: 41-48.
  • [22] Jia, Jia; Zhang, Shen; Meng, Fanbo; Wang, Yongxin; Cai, Lianhong. Emotional Audio-Visual Speech Synthesis Based on PAD. IEEE Transactions on Audio, Speech, and Language Processing, 2011, 19(3): 570-582.
  • [23] Li, Ya; Tao, Jianhua; Chao, Linlin; Bao, Wei; Liu, Yazhu. CHEAVD: A Chinese Natural Emotional Audio-Visual Database. Journal of Ambient Intelligence and Humanized Computing, 2017, 8(6): 913-924.
  • [24] Chong, Chee Seng; Davis, Chris; Kim, Jeesun. A Cantonese Audio-Visual Emotional Speech (CAVES) Dataset. Behavior Research Methods, 2024, 56(5): 5264-5278.
  • [25] Bao, Wei; Li, Ya; Gu, Mingliang; Yang, Minghao; Li, Hao; Chao, Linlin; Tao, Jianhua. Building a Chinese Natural Emotional Audio-Visual Database. 2014 12th International Conference on Signal Processing (ICSP), 2014: 583-587.
  • [26] Steinert, Lars; Putze, Felix; Kuster, Dennis; Schultz, Tanja. Audio-Visual Recognition of Emotional Engagement of People with Dementia. Interspeech 2021, 2021: 1024-1028.
  • [27] Turan, Cigdem; Kansin, Can; Zhalehpour, Sara; Aydin, Zafer; Erdem, Cigdem Eroglu. A Method for Extraction of Affective Audio-Visual Facial Clips from Movies. 2013 21st Signal Processing and Communications Applications Conference (SIU), 2013.
  • [28] Su, Rongfeng; Wang, Lan; Liu, Xunying. Multimodal Learning Using 3D Audio-Visual Data for Audio-Visual Speech Recognition. 2017 International Conference on Asian Language Processing (IALP), 2017: 40-43.
  • [29] Kunka, Bartosz; Kostek, Bozena. Objectivization of Audio-Visual Correlation Analysis. Archives of Acoustics, 2012, 37(1): 63-72.
  • [30] Pian, Weiguo; Mo, Shentong; Guo, Yunhui; Tian, Yapeng. Audio-Visual Class-Incremental Learning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 7765-7777.