7 entries in total
- [1] Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models [C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 10369-10378.
- [3] Interpretable Approaches to Detect Bias in Black-Box Models [C]. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES'18), 2018: 382-383.
- [4] One-vs.-One Mitigation of Intersectional Bias: A General Method for Extending Fairness-Aware Binary Classification [C]. New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence: The DITTET Collection, 2022, 1410: 43-54.
- [5] CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-box Models [C]. Proceedings of the 3rd AAAI/ACM Conference on AI, Ethics, and Society (AIES 2020), 2020: 166-172.
- [6] A Context-aware Black-box Adversarial Attack for Deep Driving Maneuver Classification Models [C]. 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), 2021.