Emotional impact extraction and analysis in images
Cited by: 0
Authors:
Gbehounou, Syntyche [1]
Lecellier, Francois [1]
Fernandez-Maloigne, Christine [1]
Affiliations:
[1] Univ Poitiers, Dept SIC Lab XLIM, UMR CNRS 7252, F-86962 Futuroscope, France
Keywords: emotions; classification; artificial neural network; SVM; psycho-visual tests; color images; COLOR PREFERENCE; FEATURES
DOI: 10.3166/TS.29.409-432
CLC number: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
This paper proposes a method to extract the emotional impact of images based on accurate, low-level features. We suppose that their accuracy can also implicitly encode high-level information that is interesting or discriminant for emotional impact extraction. Emotions are often associated with facial expressions, but we chose not to treat this feature as the primary emotional characteristic, since natural images in general do not contain faces. On this basis, our tests were conducted on a new image database composed of diversified images with low semantic content. The complexity of emotion modeling was taken into account in the classification process through psycho-visual tests. Twenty-five observers assessed the nature and the power of the emotions they felt: for the nature of the emotion they chose between "Negative", "Neutral" and "Positive", and the power ranged from "Low" to "High". Using the nature of the emotions, we performed a classification into three emotion classes. The average success rate is 56.15% for the artificial neural network and 55.25% for the SVM classifier, which is competitive with equivalent results in the literature.
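The abstract describes a three-class ("Negative" / "Neutral" / "Positive") classification of image feature vectors with an SVM. The paper's actual features and database are not reproduced here; the following is a minimal sketch of that setup, assuming scikit-learn's `SVC` and synthetic stand-in feature vectors (the class means and dimensionality are illustrative, not from the paper).

```python
# Illustrative sketch only: synthetic "low-level feature" vectors stand in
# for the paper's color-based image descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors for the three emotion classes.
n_per_class, n_features = 50, 16
X = np.vstack([
    rng.normal(loc=center, scale=1.0, size=(n_per_class, n_features))
    for center in (-1.0, 0.0, 1.0)  # one cluster center per class
])
y = np.repeat(["Negative", "Neutral", "Positive"], n_per_class)

# Hold out a test split, stratified so each class is represented.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM handles the multi-class case via one-vs-one internally.
clf = SVC(kernel="rbf").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"3-class accuracy on synthetic data: {accuracy:.2f}")
```

On real image features with ambiguous emotional content, accuracies near the reported 55-56% are plausible; the synthetic clusters here are far more separable, so the sketch only demonstrates the pipeline, not the difficulty of the task.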