A Multi-View Learning Approach To Deception Detection

Cited by: 12
Authors
Carissimi, Nicolo [1 ]
Beyan, Cigdem [1 ]
Murino, Vittorio [1 ,2 ]
Affiliations
[1] Ist Italiano Tecnol, Pattern Anal & Comp Vis PAVIS, Genoa, Italy
[2] Univ Verona, Dept Comp Sci, Verona, Italy
Keywords
Deception detection; nonverbal behavior; multi-view learning; multiple kernel learning; deep learning; social interactions;
DOI
10.1109/FG.2018.00095
CLC number: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recently, automatic deception detection has gained momentum thanks to advances in computer vision, computational linguistics, and machine learning. Most work in this area has focused on written deception and the analysis of verbal features. However, according to psychology, people display various nonverbal behavioral cues, in addition to verbal ones, while lying. It is therefore important to exploit additional modalities such as video and audio to detect deception accurately. When multi-modal data has been used for deception detection, previous studies concatenated all verbal and nonverbal features into a single vector. This concatenation may not be meaningful, because different feature groups can have different statistical properties, leading to lower classification accuracy. Following this intuition, we apply, for the first time in deception detection, a multi-view learning (MVL) approach in which each view corresponds to a feature group. This improves classification results over state-of-the-art methods. Additionally, we show that the optimized parameters of the MVL algorithm give insight into the contribution of each feature group to the final result, revealing the importance of each feature and eliminating the need for a separate feature selection step. Finally, we analyze face-based low-level, non-handcrafted features extracted with various pre-trained Deep Neural Networks (DNNs), showing that the face is the most important nonverbal cue for deception detection.
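The multi-view/multiple-kernel idea described in the abstract — one kernel per feature group ("view"), combined as a weighted sum whose weights indicate each view's contribution — can be sketched as follows. This is an illustrative sketch under stated assumptions, not the authors' implementation: the view names, the use of RBF kernels, and the fixed weights are all hypothetical; in the paper the weights would be learned jointly with the classifier.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of an RBF kernel over the rows of X (n_samples x n_features).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combined_kernel(views, weights):
    # Weighted sum of per-view kernels; weights are nonnegative and sum to 1.
    # The weight magnitudes serve as a rough importance score per view,
    # which is how an MKL model can reveal feature-group relevance.
    K = np.zeros((views[0].shape[0], views[0].shape[0]))
    for w, X in zip(weights, views):
        K += w * rbf_kernel(X)
    return K

# Toy data: two hypothetical views (e.g., face features and audio features).
rng = np.random.default_rng(0)
face = rng.normal(size=(6, 4))
audio = rng.normal(size=(6, 2))
weights = np.array([0.7, 0.3])  # assumed values, not taken from the paper
K = combined_kernel([face, audio], weights)
print(K.shape)
```

The combined Gram matrix K could then be passed to any kernel classifier (e.g., a kernel SVM) in place of a single concatenated-feature kernel, which is the contrast the abstract draws with prior feature-concatenation approaches.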
Pages: 599-606
Page count: 8