Survey on Reverse-engineering Artificial Intelligence

Cited by: 0
Authors
Li C.-S. [1 ]
Wang S.-Y. [1 ]
Li Y.-M. [1 ]
Zhang C.-Z. [1 ]
Yuan Y. [1 ]
Wang G.-R. [1 ]
Affiliations
[1] School of Computer Science and Technology, Beijing Institute of Technology, Beijing
Source
Ruan Jian Xue Bao/Journal of Software | 2023, Vol. 34, No. 2
Keywords
artificial intelligence security; defect analysis; reverse recovery; reverse-engineering artificial intelligence;
DOI
10.13328/j.cnki.jos.006699
Abstract
In the era of big data, artificial intelligence, and in particular its representative technologies of machine learning and deep learning, has made great progress in recent years. As artificial intelligence has been widely deployed in real-world applications, its security and privacy problems have gradually been exposed and have attracted increasing attention in the academic and industrial communities. Researchers have proposed many works addressing the security and privacy issues of machine learning from the perspectives of attack and defense. However, current approaches to machine learning security lack a complete theoretical framework and systematic framework. This survey summarizes and analyzes the reverse recovery of training data and model structure as well as the defect analysis of models, and gives a formal definition and classification system for reverse-engineering artificial intelligence. In the meantime, this survey summarizes the progress of machine learning security on the basis of reverse-engineering artificial intelligence, where the security of machine learning can be regarded as an application. Finally, the current challenges and future research directions of reverse-engineering artificial intelligence are discussed; building the theoretical framework of reverse-engineering artificial intelligence can promote the healthy development of artificial intelligence. © 2023 Chinese Academy of Sciences. All rights reserved.
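To make the "reverse recovery of training data" theme concrete, the simplest membership-inference heuristic discussed in the surveyed literature (e.g. references [5] and [8] below) exploits the fact that models tend to be more confident on examples they were trained on. The following is a minimal sketch on purely synthetic confidence values; the function name, threshold, and all numbers are illustrative assumptions, not from the survey itself.

```python
def infer_membership(confidences, threshold=0.9):
    """Confidence-thresholding membership inference (illustrative sketch):
    predict 'member of the training set' (True) whenever the model's
    top-class confidence on a record exceeds the threshold."""
    return [c > threshold for c in confidences]

# Synthetic top-class confidences: training examples tend to score higher.
train_conf = [0.99, 0.97, 0.95, 0.92]   # true members
test_conf = [0.70, 0.85, 0.60, 0.91]    # true non-members

pred = infer_membership(train_conf + test_conf)
truth = [True] * 4 + [False] * 4
accuracy = sum(p == t for p, t in zip(pred, truth)) / len(truth)
print(f"attack accuracy on synthetic data: {accuracy:.2f}")
```

On these synthetic values the attack misclassifies only the one confident non-member (0.91), illustrating why overfitting (analyzed in reference [8]) amplifies this privacy risk: the larger the confidence gap between members and non-members, the better the threshold separates them.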
Pages: 712-732
Page count: 20
References
138 references in total
  • [1] Bishop CM., Pattern Recognition and Machine Learning, (2006)
  • [2] Xu H, Ma Y, Liu HC, Deb D, Liu H, Tang JL, Jain AK., Adversarial attacks and defenses in images, graphs and text: A review, Int’l Journal of Automation and Computing, 17, 2, pp. 151-178, (2020)
  • [3] Zhang CN, Benz P, Lin CG, Karjauv A, Wu J, Kweon IS., A survey on universal adversarial attack, (2021)
  • [4] Nasr M, Shokri R, Houmansadr A., Machine learning with membership privacy using adversarial regularization, Proc. of the ACM SIGSAC Conf. on Computer and Communications Security, pp. 634-646, (2018)
  • [5] Shokri R, Stronati M, Song CZ, Shmatikov V., Membership inference attacks against machine learning models, Proc. of the 2017 IEEE Symp. on Security and Privacy (SP), pp. 3-18, (2017)
  • [6] Melis L, Song CZ, De Cristofaro E, Shmatikov V., Exploiting unintended feature leakage in collaborative learning, Proc. of the 2019 IEEE Symp. on Security and Privacy (SP), pp. 691-706, (2019)
  • [7] Song L, Shokri R, Mittal P., Membership inference attacks against adversarially robust deep learning models, Proc. of the 2019 IEEE Security and Privacy Workshops (SPW), pp. 50-56, (2019)
  • [8] Yeom S, Giacomelli I, Fredrikson M, Jha S., Privacy risk in machine learning: Analyzing the connection to overfitting, Proc. of the 31st IEEE Computer Security Foundations Symp. (CSF), pp. 268-282, (2018)
  • [9] Choquette-Choo CA, Tramer F, Carlini N, Papernot N., Label-only membership inference attacks, Proc. of the Int’l Conf. on Machine Learning, pp. 1964-1974, (2021)
  • [10] Truex S, Liu L, Gursoy ME, et al., Demystifying membership inference attacks in machine learning as a service, IEEE Trans. on Services Computing, (2019)