Augment CAPTCHA Security Using Adversarial Examples With Neural Style Transfer

Cited: 1
Authors
Dinh, Nghia [1 ]
Tran-Trung, Kiet [2 ]
Hoang, Vinh Truong [2 ]
Affiliations
[1] VSB Tech Univ Ostrava, Fac Elect Engn & Comp Sci, Ostrava 70833, Czech Republic
[2] Ho Chi Minh City Open Univ, Fac Comp Sci, Ho Chi Minh 722000, Vietnam
Source
IEEE ACCESS | 2023, Vol. 11
Keywords
Machine learning; CNN; DNN; CAPTCHA; security; adversarial examples; cognitive
DOI
10.1109/ACCESS.2023.3298442
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
To counteract the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. However, automated attacks employing powerful deep learning techniques have achieved high success rates against common CAPTCHAs, including image-based and text-based schemes. Encouragingly, Adversarial Examples, which introduce imperceptible noise into an image, have recently been shown to significantly degrade the accuracy of DNNs (Deep Neural Networks). The authors improve the CAPTCHA security architecture by increasing the resilience of Adversarial Examples through combination with Neural Style Transfer. The findings demonstrate that the proposed approach considerably improves the security of ordinary CAPTCHAs.
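The abstract describes injecting imperceptible adversarial noise into CAPTCHA images. As an illustration only, and not the paper's actual method, the following sketch shows the standard Fast Gradient Sign Method (FGSM) applied to a toy logistic classifier in NumPy; the model weights, pixel vector, and epsilon budget here are all hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression 'classifier'.

    Adds eps * sign(grad of loss w.r.t. x), the perturbation that most
    increases the cross-entropy loss under an L-infinity budget of eps.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for this model
    # Clip back to valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy "CAPTCHA pixel" vector and a fixed, hypothetical linear model.
x = np.array([0.2, 0.8, 0.5])
w = np.array([1.5, -2.0, 0.7])
b = 0.1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.05)
print(np.round(x_adv, 2))  # → [0.15 0.85 0.45]
```

With eps small, the perturbed image is visually indistinguishable from the original, yet the classifier's loss increases; the paper's contribution is making such perturbations more robust by combining them with Neural Style Transfer.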
Pages: 83553-83561 (9 pages)
Related Papers
50 records
  • [41] Neural style transfer generative adversarial network (NST-GAN) for facial expression recognition
    Khemakhem, Faten
    Ltifi, Hela
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2023, 12 (02)
  • [42] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
    Zhang J.
    Qian W.
    Cao J.
    Xu D.
    Neural Computing and Applications, 2024, 36 (23) : 14379 - 14394
  • [43] On the Robustness to Adversarial Examples of Neural ODE Image Classifiers
    Carrara, Fabio
    Caldelli, Roberto
    Falchi, Fabrizio
    Amato, Giuseppe
    2019 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY (WIFS), 2019
  • [44] Interpretability Analysis of Deep Neural Networks With Adversarial Examples
    Dong Y.-P.
    Su H.
    Zhu J.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (01): 75 - 86
  • [45] Compound adversarial examples in deep neural networks
    Li, Yanchun
    Li, Zhetao
    Zeng, Li
    Long, Saiqin
    Huang, Feiran
    Ren, Kui
    INFORMATION SCIENCES, 2022, 613 : 50 - 68
  • [46] Audio Adversarial Examples Generation with Recurrent Neural Networks
    Chang, Kuei-Huan
    Huang, Po-Hao
    Yu, Honggang
    Jin, Yier
    Wang, Ting-Chi
    2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 488 - 493
  • [47] A Reinforced Generation of Adversarial Examples for Neural Machine Translation
    Zou, Wei
    Huang, Shujian
    Xie, Jun
    Dai, Xinyu
    Chen, Jiajun
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 3486 - 3497
  • [48] Enhancing the Security of Deep Learning Steganography via Adversarial Examples
    Shang, Yueyun
    Jiang, Shunzhi
    Ye, Dengpan
    Huang, Jiaqing
    MATHEMATICS, 2020, 8 (09)
  • [49] Assessing Threat of Adversarial Examples on Deep Neural Networks
    Graese, Abigail
    Rozsa, Andras
    Boult, Terrance E.
    2016 15TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2016), 2016, : 69 - 74
  • [50] Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property
    Rathi, Pulkit
    Bhadauria, Saumya
    Rathi, Sugandha
    APPLIED ARTIFICIAL INTELLIGENCE, 2022, 36 (01)