50 entries in total
- [31] EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021: 2081-2095
- [33] Interpretability of Speech Emotion Recognition modelled using Self-Supervised Speech and Text Pre-Trained Embeddings [J]. INTERSPEECH 2022, 2022: 4496-4500
- [34] SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification [J]. IEEE ACCESS, 2023, 11: 6673-6681
- [35] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [37] Backdoor Pre-trained Models Can Transfer to All [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021: 3141-3158
- [38] Microstructure segmentation with deep learning encoders pre-trained on a large microscopy dataset [J]. npj Computational Materials, 8
- [40] Training Set Cleansing of Backdoor Poisoning by Self-Supervised Representation Learning [J]. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2023