Defending Emotional Privacy with Adversarial Machine Learning for Social Good

Cited by: 0
Authors
Al-Maliki, Shawqi [1 ]
Abdallah, Mohamed [1 ]
Qadir, Junaid [2 ]
Al-Fuqaha, Ala [1 ]
Affiliations
[1] Hamad Bin Khalifa Univ, Informat & Comp Technol ICT Div, Coll Sci & Engn, Doha 34110, Qatar
[2] Qatar Univ, Dept Comp Sci & Engn, Coll Engn, Doha, Qatar
Keywords
Evasion Attacks for Good; Emotional-Privacy Preservation; Robust Adversarial ML Attacks
DOI
10.1109/IWCMC58020.2023.10182780
CLC Classification
TP301 [Theory and Methods]
Subject Classification
081202
Abstract
Protecting the privacy of personal information, including emotions, is essential, and organizations must comply with relevant regulations to ensure it. Unfortunately, some organizations do not respect these regulations or lack transparency, leaving human privacy at risk. These violations often occur when unauthorized organizations misuse machine learning (ML) technology, such as facial expression recognition (FER) systems. Researchers and practitioners must therefore take action and use ML technology for social good to protect human privacy. One emerging research area that can help address such violations is adversarial ML for social good: evasion attacks, normally used to fool ML systems, can be repurposed to prevent misused ML technology, such as ML-based FER, from recognizing true emotions, thereby protecting individuals' personal and emotional privacy. In this work, we propose Chaining of Adversarial ML Attacks (CAA), an approach that creates a robust attack to fool misused technology and prevent it from detecting true emotions. To validate the proposed approach, we conduct extensive experiments using various evaluation metrics and baselines. Our results show that CAA contributes significantly to emotional-privacy preservation, with the fool rate increasing with chaining length: in our experiments, the fool rate increases by 48% at each subsequent stage of the chained targeted attacks (CTA) while keeping the perturbations imperceptible (epsilon = 0.0001).
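The chaining idea described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: it uses a toy linear softmax classifier in place of a deep FER model, and the function names (`targeted_step`, `chain_targeted_attack`) and parameter choices are assumptions made for illustration. Each stage applies a small targeted evasion step toward a chosen (false) emotion label, and the stages are chained so perturbations accumulate while each individual step stays small.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_step(x, W, target, eps):
    """One targeted evasion step on a linear classifier (logits = W @ x).

    Follows the sign of the gradient of log p(target | x) with respect
    to the input, scaled by a small epsilon (an FGSM-style step).
    """
    p = softmax(W @ x)
    grad = W[target] - p @ W  # gradient of log p(target | x) w.r.t. x
    return x + eps * np.sign(grad)

def chain_targeted_attack(x, W, target, eps, stages):
    """CTA sketch: chain `stages` targeted steps so that small,
    individually imperceptible perturbations accumulate until the
    classifier is steered toward the target (false) label."""
    for _ in range(stages):
        x = targeted_step(x, W, target, eps)
    return x
```

With a longer chain, the attack succeeds more often, mirroring the abstract's observation that the fool rate grows with chaining length while each per-stage perturbation remains small.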
Pages: 345-350 (6 pages)