IBAttack: Being Cautious about Data Labels

Cited by: 0
Authors
Agarwal, Akshay [1 ]
Singh, Richa [2 ]
Vatsa, Mayank [2 ]
Ratha, Nalini [3 ]
Affiliations
[1] IISER Bhopal, Department of Data Science and Engineering, Bhopal, 462066, India
[2] IIT Jodhpur, Department of Computer Science and Engineering, Jodhpur, Rajasthan, 342037, India
[3] University at Buffalo, Department of Computer Science and Electrical Engineering, Buffalo, NY 14260, United States
Keywords
Deep learning; Job analysis; Malware; Perturbation techniques
DOI
10.1109/TAI.2022.3206259
Abstract
Traditional backdoor attacks insert a trigger patch into the training images and associate the trigger with a targeted class label. Backdoor attacks are a rapidly evolving family of attacks that can have a significant impact. Adversarial perturbations, on the other hand, rely on a significantly different attack mechanism from traditional backdoor corruption: an imperceptible noise is learned to fool deep learning models. In this research, we amalgamate these two concepts and propose a novel imperceptible backdoor attack, termed IBAttack, in which adversarial images are associated with the desired target classes. A significant advantage of the proposed adversarial-based backdoor attack over the traditional trigger-based mechanism is its imperceptibility. In contrast to existing attacks, the proposed adversarial dynamic attack is agnostic to classifiers and trigger patterns. Extensive evaluation on multiple databases and networks demonstrates the effectiveness of the proposed attack. © 2020 IEEE.
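To make the poisoning idea described in the abstract concrete, the following is a minimal sketch of how adversarially perturbed images could be relabelled with an attacker-chosen target class. It assumes a PGD-style targeted perturbation and PyTorch; the function names, hyperparameters (eps, alpha, steps, poison_fraction), and the perturbation method are illustrative assumptions and not the exact IBAttack procedure.

```python
import torch
import torch.nn.functional as F


def pgd_targeted_perturb(model, images, target_labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft imperceptible, targeted perturbations (illustrative PGD-style sketch)."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Targeted objective: push predictions toward the attacker's target class.
        loss = F.cross_entropy(model(adv), target_labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() - alpha * grad.sign()          # descend on the targeted loss
        adv = images + (adv - images).clamp(-eps, eps)    # keep the noise imperceptible
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()


def poison_dataset(model, images, labels, target_class, poison_fraction=0.1):
    """Replace a fraction of clean samples with adversarial images relabelled to the target class."""
    n_poison = int(poison_fraction * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    target_labels = torch.full((n_poison,), target_class, dtype=torch.long)
    poisoned_images = images.clone()
    poisoned_images[idx] = pgd_targeted_perturb(model, images[idx], target_labels)
    poisoned_labels = labels.clone()
    poisoned_labels[idx] = target_class
    return poisoned_images, poisoned_labels
```

In this sketch, the poisoned samples carry no visible trigger patch: the "trigger" is the learned imperceptible noise, and the mislabelled target class is what the victim model associates with it during training.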
Pages: 1484-1493