Research on image affection tagging based on multi-modality information fusion

Cited by: 0
Authors
Tang Z. [1 ]
Liu X. [1 ]
Yang H. [1 ]
Lu C. [1 ]
Affiliations
[1] Industrial Design Institute, Zhejiang University of Technology, Hangzhou
Keywords
Electroencephalogram; Image affection tagging; Image content; International affective picture system; Multi-modality information fusion
DOI
10.13196/j.cims.2020.01.014
Abstract
With the growth of image resources on the web, affect, as one of the important semantics of an image, has become essential for image retrieval and selection, and image affection tagging has therefore attracted wide attention. This paper proposes an image affection tagging method based on multi-modality information fusion (EEG and image content). First, spectral features of the EEG signal and color and texture features of the image are extracted. Then, based on these two kinds of features and two fusion strategies (feature level and decision level), a support vector machine (SVM) classification model is built for image affection tagging and classification. The IAPS data set is used to evaluate the effectiveness of the proposed method. The results demonstrate that the method based on multi-modality information fusion achieves better classification performance than methods using only EEG features or only image features. Moreover, the proposed method helps narrow the semantic gap between low-level visual features and high-level emotional semantics. © 2020, Editorial Department of CIMS. All rights reserved.
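The abstract names the two fusion strategies but not their implementation. The following is a minimal sketch, assuming pre-extracted features and scikit-learn (not stated in the paper): feature-level fusion concatenates the EEG and image feature vectors before training a single SVM, while decision-level fusion trains one SVM per modality and combines their probability outputs. The array shapes, the synthetic data, and the fusion weight w_eeg are all illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical pre-extracted features; shapes are assumptions.
    rng = np.random.default_rng(0)
    n = 200
    X_eeg = rng.normal(size=(n, 64))   # e.g., EEG band-power (spectral) features
    X_img = rng.normal(size=(n, 48))   # e.g., color histogram + texture statistics
    y = rng.integers(0, 2, size=n)     # binary affect label (e.g., positive/negative)

    # Feature-level fusion: concatenate both feature vectors, train one SVM.
    clf_feat = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf_feat.fit(np.hstack([X_eeg, X_img]), y)

    # Decision-level fusion: one SVM per modality, then combine their
    # class-probability outputs (here a simple weighted average).
    clf_eeg = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X_eeg, y)
    clf_img = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X_img, y)

    def predict_decision_fusion(x_eeg, x_img, w_eeg=0.5):
        # Average per-modality class probabilities, then take the argmax.
        p = w_eeg * clf_eeg.predict_proba(x_eeg) + (1 - w_eeg) * clf_img.predict_proba(x_img)
        return p.argmax(axis=1)

    labels = predict_decision_fusion(X_eeg, X_img)

Probability averaging is only one possible decision-level rule; the paper does not specify its combination rule, and alternatives such as majority voting or a learned meta-classifier fit the same structure.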
Pages: 134-144
Page count: 10