Learning Visual Storylines with Skipping Recurrent Neural Networks

Cited by: 15
Authors
Sigurdsson, Gunnar A. [1 ]
Chen, Xinlei [1 ]
Gupta, Abhinav [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
Keywords
DOI
10.1007/978-3-319-46454-1_5
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
What does a typical visit to Paris look like? Do people first take photos of the Louvre and then the Eiffel Tower? Can we visually model a temporal event like "Paris Vacation" using current frameworks? In this paper, we explore how we can automatically learn the temporal aspects, or storylines, of visual concepts from web data. Previous attempts focus on consecutive image-to-image transitions and are unsuccessful at recovering the long-term underlying story. Unlike classic RNNs, our novel Skipping Recurrent Neural Network (S-RNN) model does not attempt to predict each and every data point in the sequence. Rather, S-RNN skips through the images in the photo stream, exploring the space of all ordered subsets of the albums via an efficient sampling procedure. This approach reduces the negative impact of strong short-term correlations and recovers the latent story more accurately. We show how our learned storylines can be used to analyze, predict, and summarize photo albums from Flickr. Our experimental results provide strong qualitative and quantitative evidence that S-RNN is significantly better than other candidate methods such as LSTMs at learning long-term correlations and recovering latent storylines. Moreover, we show how storylines can help machines better understand and summarize photo streams by inferring a brief personalized story for each individual album.
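To make the skipping idea concrete, below is a minimal illustrative sketch in numpy, not the authors' implementation: a plain RNN that, instead of consuming every photo in an album, scores a small window of upcoming frames against its hidden state and samples which one to skip to, so only an ordered subset of the album drives the recurrence. All names and sizes here (hidden_size, skip_window, the softmax sampling rule, random parameters) are assumptions made for illustration.

# Illustrative sketch only (assumed names/sizes); shows the control flow of a
# recurrent model that advances over a sampled ordered subset of frames
# rather than every consecutive frame.
import numpy as np

rng = np.random.default_rng(0)
hidden_size, feat_size, skip_window = 32, 64, 5

# Randomly initialized parameters: a vanilla RNN cell plus a scoring head.
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_x = rng.normal(scale=0.1, size=(hidden_size, feat_size))
W_s = rng.normal(scale=0.1, size=(hidden_size, feat_size))

def rnn_step(h, x):
    # One vanilla RNN update on the features of the chosen frame.
    return np.tanh(W_h @ h + W_x @ x)

def sample_skip(h, window):
    # Score each candidate frame in the window against the current hidden
    # state and sample an offset, favoring high-scoring frames.
    scores = window @ (W_s.T @ h)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(len(window), p=probs)

def skip_through_album(frames):
    # Traverse the album by repeatedly skipping ahead within a small window;
    # only the chosen (ordered) subset of frames drives the recurrence.
    h = np.zeros(hidden_size)
    pos, chosen = 0, []
    while pos < len(frames):
        window = frames[pos:pos + skip_window]
        pos += sample_skip(h, window)   # index of the frame we skip to
        chosen.append(pos)
        h = rnn_step(h, frames[pos])
        pos += 1                        # continue past the chosen frame
    return chosen

# Toy album of 40 random "frame features"; real inputs would be image features.
album = rng.normal(size=(40, feat_size))
print(skip_through_album(album))

The point of the sketch is only the control flow: the recurrence is driven by a sampled ordered subset of frames rather than by every consecutive frame, which is the skipping behavior the abstract describes.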
Pages: 71-88
Number of pages: 18