The Role of Visual Features in Text-Based CAPTCHAs: An fNIRS Study for Usable Security

Cited by: 0
Authors
Mulazimoglu, Emre [1 ]
Cakir, Murat P. [2 ]
Acarturk, Cengiz [1 ,2 ]
Affiliations
[1] Middle East Tech Univ, Cyber Secur Dept, TR-06800 Ankara, Turkey
[2] Middle East Tech Univ, Dept Cognit Sci, TR-06800 Ankara, Turkey
Keywords
NEAR-INFRARED SPECTROSCOPY;
DOI
10.1155/2021/8842420
CLC classification
Q [Biological Sciences]
Discipline codes
07 ; 0710 ; 09 ;
Abstract
To mitigate dictionary attacks and similar automated attacks on information systems, developers commonly deploy CAPTCHA challenges as Human Interactive Proofs (HIPs) to distinguish human users from scripts. Appropriate use of CAPTCHA requires a design that balances robustness against usability. Previous research reveals that most usability studies have relied on accuracy and response time as the measurement criteria for quantitative analysis. The present study applies optical neuroimaging techniques to the analysis of CAPTCHA design. The functional Near-Infrared Spectroscopy (fNIRS) technique was used to explore the hemodynamic responses in the prefrontal cortex elicited by CAPTCHA stimuli of varying types. The findings suggest that regions in the left and right dorsolateral and right dorsomedial prefrontal cortex respond to the degrees of line occlusion, rotation, and wave distortion present in a CAPTCHA. The systematic addition of these visual effects introduced nonlinear effects on the behavioral and prefrontal oxygenation measures, indicative of emergent Gestalt effects that may have influenced the perception of the overall CAPTCHA figure.
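The three visual effects named in the abstract (line occlusion, rotation, and wave distortion) are standard text-CAPTCHA transformations. As a rough illustration of how such stimuli can be generated, the sketch below applies a rotation and a sinusoidal wave to a binary glyph grid and overlays an occluding line. This is a minimal toy sketch assuming a grid-of-pixels representation; the function and parameter names are illustrative and are not taken from the authors' stimulus-generation procedure.

```python
import math


def distort(grid, angle_deg=15.0, wave_amp=1.5, wave_freq=0.5):
    """Apply rotation and a sinusoidal wave distortion to a binary glyph grid.

    Each output pixel is inverse-mapped: first the wave offset is undone,
    then the rotation about the grid centre. Pixels mapping outside the
    grid stay blank (0).
    """
    h, w = len(grid), len(grid[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = math.radians(angle_deg)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Undo the vertical wave displacement for this column.
            sy = y - wave_amp * math.sin(wave_freq * x)
            # Undo the rotation about the grid centre.
            dx, dy = x - cx, sy - cy
            sx = cx + dx * math.cos(-theta) - dy * math.sin(-theta)
            sy2 = cy + dx * math.sin(-theta) + dy * math.cos(-theta)
            ix, iy = int(round(sx)), int(round(sy2))
            if 0 <= iy < h and 0 <= ix < w:
                out[y][x] = grid[iy][ix]
    return out


def add_occluding_line(grid, row):
    """Overlay a horizontal occluding line (the third effect in the study)."""
    for x in range(len(grid[row])):
        grid[row][x] = 1
    return grid
```

With the angle and wave amplitude set to zero the transform is the identity, which makes it easy to vary one effect at a time, mirroring the study's systematic addition of effects.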
Pages: 24