Interfacing Assessment using Facial Expression Recognition

Cited by: 0
Authors
Andersen, Rune A. [1 ]
Nasrollahi, Kamal [1 ]
Moeslund, Thomas B. [1 ]
Haque, Mohammad A. [1 ]
Affiliations
[1] Aalborg Univ, Visual Anal People Lab, Sofiendalsvej 11, DK-9200 Aalborg, Denmark
Keywords
Facial Expression Recognition; Interfacing Technologies; Motion Controlled; Gamepad; SELF;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One of the most important decisions in gaming is the choice of interfacing technology. The gamepad has traditionally been the dominant interface in the gaming industry, but motion-controlled interfacing has recently come into widespread use. The purpose of this paper is to study whether a motion-controlled interface is a feasible alternative to the gamepad when evaluated from a user-experience point of view. To do so, a custom game was developed and 25 test subjects were asked to play it using both types of interface. To evaluate the users' experiences during the game, hedonic and pragmatic quality are assessed using both subjective and objective evaluation methods, in order to cross-validate the obtained results. An application of computer vision, facial expression recognition, is used as a non-obtrusive, objective hedonic measure, while the score obtained by the user during the game serves as a pragmatic quality measure. To the best of our knowledge, facial expression recognition has not previously been used to assess the hedonic quality of game interfaces. Thorough experimental results show that the user experience of the motion-controlled interface is significantly better than that of the gamepad interface, in terms of both hedonic and pragmatic quality. The facial expression recognition system proved to be a useful non-obtrusive way to objectively evaluate the hedonic quality of the interfacing technologies.
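The abstract does not describe how per-frame facial expression recognition outputs are turned into a single hedonic score per interface; as an illustrative sketch only (the function name, the mean-based aggregation, and all numbers below are assumptions, not the authors' published method), one simple approach is to average a classifier's per-frame probability of a positive expression over a play session and compare the two interfaces:

```python
from statistics import mean

def hedonic_score(frame_probs):
    """Aggregate per-frame probabilities of a positive expression
    (e.g. the 'happy' output of an FER classifier) into one
    session-level hedonic score by taking the mean.

    This aggregation is a hypothetical sketch, not the paper's method.
    """
    return mean(frame_probs)

# Hypothetical per-frame FER outputs for one subject (invented data,
# not results from the paper).
gamepad_frames = [0.20, 0.15, 0.25, 0.30, 0.10]
motion_frames = [0.55, 0.60, 0.45, 0.70, 0.50]

print(hedonic_score(gamepad_frames))
print(hedonic_score(motion_frames))
```

A session-level mean is the simplest non-obtrusive summary; a real pipeline would also need face detection, frame alignment, and per-subject normalization before any cross-interface comparison.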
Pages: 186-193
Page count: 8