Augment CAPTCHA Security Using Adversarial Examples With Neural Style Transfer

Cited by: 1
Authors
Dinh, Nghia [1 ]
Tran-Trung, Kiet [2 ]
Hoang, Vinh Truong [2 ]
Affiliations
[1] VSB Tech Univ Ostrava, Fac Elect Engn & Comp Sci, Ostrava 70833, Czech Republic
[2] Ho Chi Minh City Open Univ, Fac Comp Sci, Ho Chi Minh 722000, Vietnam
Source
IEEE ACCESS | 2023, Vol. 11
Keywords
Machine learning; CNN; DNN; CAPTCHA; security; adversarial examples; cognitive;
DOI
10.1109/ACCESS.2023.3298442
CLC Classification
TP [Automation technology; computer technology];
Discipline Code
0812 ;
Abstract
To counteract the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. However, automated attacks employing powerful deep learning techniques have achieved high success rates against common CAPTCHAs, including image-based and text-based schemes. Promisingly, Adversarial Examples, which introduce imperceptible noise, have recently been shown to substantially degrade the accuracy of DNNs (Deep Neural Networks). The authors improve the CAPTCHA security architecture by increasing the resilience of Adversarial Examples through combination with Neural Style Transfer. The findings demonstrate that the proposed approach considerably improves the security of ordinary CAPTCHAs.
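To make the abstract's idea of "imperceptible noise" concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), one standard way to craft adversarial perturbations. This is a minimal, hypothetical illustration on a toy linear classifier, not the paper's actual CAPTCHA model or its exact attack procedure.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """FGSM step: nudge each pixel by +/- eps in the direction that
    increases the loss, then clip back to the valid pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for a CAPTCHA classifier: logistic model p = sigmoid(w . x).
# (Illustrative only; the paper attacks real DNN solvers.)
rng = np.random.default_rng(0)
x = rng.random(64)            # a flattened 8x8 "CAPTCHA" patch in [0, 1)
w = rng.standard_normal(64)   # toy model weights

# For loss L = -log(sigmoid(w . x)) on the true class, dL/dx = -(1 - p) * w.
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = -(1.0 - p) * w

x_adv = fgsm_perturb(x, grad, eps=0.05)
# Perturbation is bounded: no pixel moves by more than eps,
# which is why the noise can stay visually imperceptible.
assert np.max(np.abs(x_adv - x)) <= 0.05 + 1e-9
```

The eps bound is what keeps the change invisible to humans while still flipping a model's prediction; the paper's contribution is making such perturbations more robust by combining them with Neural Style Transfer.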
Pages: 83553 - 83561
Page count: 9
Related Papers
50 in total
  • [21] Masked Neural Style Transfer using Convolutional Neural Networks
    Handa, Arushi
    Garg, Prerna
    Khare, Vijay
    2018 INTERNATIONAL CONFERENCE ON RECENT INNOVATIONS IN ELECTRICAL, ELECTRONICS & COMMUNICATION ENGINEERING (ICRIEECE 2018), 2018, : 2099 - 2104
  • [22] Crafting Adversarial Examples for Neural Machine Translation
    Zhang, Xinze
    Zhang, Junzhe
    Chen, Zhenhua
    He, Kun
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 1967 - 1977
  • [23] Deep neural rejection against adversarial examples
    Angelo Sotgiu
    Ambra Demontis
    Marco Melis
    Battista Biggio
    Giorgio Fumera
    Xiaoyi Feng
    Fabio Roli
    EURASIP Journal on Information Security, 2020
  • [25] Adversarial Examples Detection With Bayesian Neural Network
    Li, Yao
    Tang, Tongyi
    Hsieh, Cho-Jui
    Lee, Thomas C. M.
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (05): : 1 - 11
  • [27] ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES
    Teng, Da
    Song, Xiao
    Gong, Guanghong
    Han, Liang
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): : 123 - 133
  • [28] Unsupervised Generative Adversarial Network for Style Transfer using Multiple Discriminators
    Akhtar, Mohd Rayyan
    Liu, Peng
    THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021), 2022, 12083
  • [29] Detecting Adversarial Examples - A Lesson from Multimedia Security
    Schoettle, Pascal
    Schloegl, Alexander
    Pasquini, Cecilia
    Boehme, Rainer
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 947 - 951
  • [30] Adversarial Examples for Edge Detection: They Exist, and They Transfer
    Cosgrove, Christian
    Yuille, Alan L.
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 1059 - 1068