A Multi-Modal Neuro-Physiological Study of Malicious Insider Threats

Cited by: 2
Authors
Hashem, Yassir [1 ]
Takabi, Hassan [1 ]
Dantu, Ram [1 ]
Nielsen, Rodney [1 ]
Affiliations
[1] Univ North Texas, Dept Comp Sci & Engn, Denton, TX 76203 USA
Keywords
Electroencephalogram (EEG); Neuroscience; Eye tracking; Insider threat; Classification
DOI
10.1145/3139923.3139930
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
It has long been recognized that solutions to the insider threat are mainly user-centric, and several psychological and psychosocial models have been proposed. However, the user behavior underlying these malicious acts is still not fully understood, motivating further investigation at the neuro-physiological level. In this work, we conduct a multi-modal study of how users' brains process malicious and benign activities. In particular, we focus on Electroencephalogram (EEG) signals arising from the user's brain activity and on eye tracking, which can capture spontaneous responses unfiltered by the conscious mind. We conduct human-subject experiments to capture EEG signals from a group of 25 participants while they perform several computer-based activities in different scenarios. We analyze the EEG signals and the eye tracking data, extract features, and evaluate our approach using several classifiers. The results show that our approach achieves an average accuracy of 99.77% in detecting the malicious insider using EEG data from 256 channels (sensors), and an average detection accuracy of up to 95.64% using only five channels. Using eye movement and pupil behavior data, it achieves an average detection accuracy of up to 83%. In general, our results indicate that EEG signals and eye tracking data can reveal valuable knowledge about a user's malicious intent and can serve as effective indicators in designing real-time insider threat monitoring and detection frameworks.
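As a rough illustration of the pipeline the abstract describes (feature extraction from multi-channel EEG followed by evaluation with several classifiers), the following is a minimal sketch in Python. The sampling rate, the band-power features, the synthetic data, and the specific classifiers are all assumptions made here for illustration; the record does not include the authors' implementation.

```python
# Minimal sketch of an EEG classification pipeline in the spirit of the
# abstract: per-channel band-power features, then evaluation with several
# classifiers. Synthetic data stands in for recorded EEG; with random data,
# accuracies will hover near chance. All choices below are assumptions,
# not the authors' actual method.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

RNG = np.random.default_rng(0)
FS = 250            # sampling rate in Hz (assumed)
N_CHANNELS = 5      # the paper also reports results with only five channels
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window):
    """Mean spectral power per frequency band and channel (a common EEG feature)."""
    freqs, psd = welch(window, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)  # shape: (len(BANDS) * N_CHANNELS,)

# Synthetic stand-in for recorded EEG: 200 two-second windows,
# half labeled "benign" (0) and half "malicious" (1).
X = np.stack([band_powers(RNG.standard_normal((N_CHANNELS, 2 * FS)))
              for _ in range(200)])
y = np.repeat([0, 1], 100)

# Evaluate several classifiers with 5-fold cross-validation, mirroring
# the "several classifiers" evaluation at a very small scale.
for name, clf in [("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier()),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```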
Pages: 33-44 (12 pages)
Related Papers (50 total)
  • [21] Granular estimation of user cognitive workload using multi-modal physiological sensors
    Wang, Jingkun
    Stevens, Christopher
    Bennett, Winston
    Yu, Denny
    FRONTIERS IN NEUROERGONOMICS, 2024, 5
  • [22] Cognitive and Physiological Response for Health Monitoring in an Ageing Population: A Multi-modal System
    Saibene, Aurora
    Gasparini, Francesca
    INTERNET SCIENCE, INSCI 2019, 2019, 11938 : 341 - 347
  • [23] Workload categorization for hazardous industries: The semantic modelling of multi-modal physiological data
    Konig, Jemma L.
    Hinze, Annika
    Bowen, Judy
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 141 : 369 - 381
  • [24] Amylopectin-assisted hydrogel conductors for multi-modal physiological signal acquisition
    Wang, Guan
    Liu, Meijia
    Zhang, Chunpeng
    Xia, Shan
    Gao, Guanghui
    Shi, Yongfeng
    EUROPEAN POLYMER JOURNAL, 2024, 207
  • [25] A Study of Multi-modal Display System with Visual Feedback
    Tanikawa, Tomohiro
    Hirose, Michitaka
    PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON UNIVERSAL COMMUNICATION, 2008, : 285 - 292
  • [26] M1 East Midlands multi-modal study, England
    Malik, N
    PROCEEDINGS OF THE INSTITUTION OF CIVIL ENGINEERS-TRANSPORT, 2004, 157 (04) : 239 - 249
  • [27] Noncontact Sleep Study by Multi-Modal Sensor Fusion
    Chung, Ku-young
    Song, Kwangsub
    Shin, Kangsoo
    Sohn, Jinho
    Cho, Seok Hyun
    Chang, Joon-Hyuk
    SENSORS, 2017, 17 (07)
  • [28] A comparative study of multi-modal metaphors in food advertisements
    Kou, Guirong
    Liang, Yuan
    SEMIOTICA, 2022, 2022 (249) : 275 - 291
  • [29] Multi-Modal Features Representation-Based Convolutional Neural Network Model for Malicious Website Detection
    Alsaedi, Mohammed
    Ghaleb, Fuad A.
    Saeed, Faisal
    Ahmad, Jawad
    Alasli, Mohammed
    IEEE ACCESS, 2024, 12 : 7271 - 7284
  • [30] PowerDetector: Malicious PowerShell Script Family Classification Based on Multi-Modal Semantic Fusion and Deep Learning
    Yang, Xiuzhang
    Peng, Guojun
    Zhang, Dongni
    Gao, Yuhang
    Li, Chenguang
    CHINA COMMUNICATIONS, 2023, 20 (11) : 202 - 224