共 50 条
- [44] CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples 27TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2020), 2020,
- [45] Adversarial Attacks and Defenses in Large Language Models: Old and New Threats PROCEEDINGS ON I CAN'T BELIEVE IT'S NOT BETTER: FAILURE MODES IN THE AGE OF FOUNDATION MODELS AT NEURIPS 2023 WORKSHOPS, 2023, 239 : 103 - 117
- [46] Analyzing the Use of Large Language Models for Content Moderation with ChatGPT Examples PROCEEDINGS OF THE 2023 WORKSHOP ON OPEN CHALLENGES IN ONLINE SOCIAL NETWORKS, OASIS 2023/ 34TH ACM CONFERENCE ON HYPERTEXT AND SOCIAL MEDIA, HT 2023, 2023, : 1 - 8
- [47] Evolving Interpretable Visual Classifiers with Large Language Models COMPUTER VISION - ECCV 2024, PT LXIV, 2025, 15122 : 183 - 201
- [49] Generating transferable adversarial examples based on perceptually-aligned perturbation International Journal of Machine Learning and Cybernetics, 2021, 12 : 3295 - 3307