Learning fair representations via rebalancing graph structure

Cited by: 8
Authors
Zhang, Guixian [1 ]
Cheng, Debo [2 ]
Yuan, Guan [1 ]
Zhang, Shichao [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Jiangsu, Peoples R China
[2] Univ South Australia, UniSA Stem, Adelaide, SA 5095, Australia
[3] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Guangxi, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation; Australian Research Council;
Keywords
Fair representation learning; Graph neural network; Structural rebalancing; Decision-making; Adversarial learning;
DOI
10.1016/j.ipm.2023.103570
CLC number
TP [Automation technology; computer technology];
Subject classification code
0812 ;
Abstract
Graph Neural Network (GNN) models have been extensively researched and used to extract valuable insights from graph data. The performance of GNN-based fairness algorithms depends on the neighbourhood aggregation mechanism used during representation updates. However, this mechanism may cause the sensitive attributes of low-degree nodes to be overlooked, while the sensitive attributes of high-degree nodes exert a disproportionate influence on their neighbours. To address these limitations, we propose a novel algorithm called Structural Rebalancing Graph Neural Network (SRGNN). SRGNN accounts for the influence of both low-degree and high-degree nodes when learning fair representations for decision-making. SRGNN first applies a fair structural rebalancing algorithm that equalises the status of nodes by reducing the influence of high-degree nodes and enhancing that of low-degree nodes. Next, SRGNN uses adversarial learning with gradient normalisation to ensure that each node's representation is separated from sensitive-attribute information. We conducted extensive experiments on three real-world datasets to evaluate the performance of SRGNN. The results show that SRGNN outperforms all competing models on both fairness metrics.
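The abstract describes two components: degree-based structural rebalancing and adversarial removal of sensitive-attribute information. The sketch below is a minimal, hypothetical illustration of these two ideas, not the authors' SRGNN implementation: the weighting scheme, function names, and discriminator are assumptions, and the paper's gradient-normalisation step is replaced here by a standard gradient-reversal layer as a stand-in for the adversarial setup.

```python
# Hypothetical sketch of the two ideas summarised in the abstract:
# (1) degree-based rebalancing of the adjacency matrix, and
# (2) an adversarial sensitive-attribute discriminator behind a gradient-reversal layer.
import torch
import torch.nn as nn


def rebalance_adjacency(adj: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Down-weight edges around high-degree nodes (and relatively boost low-degree
    nodes) via symmetric degree normalisation A' = D^-alpha A D^-alpha."""
    deg = adj.sum(dim=1).clamp(min=1.0)      # node degrees, guarding isolated nodes
    d_inv = deg.pow(-alpha)                  # larger alpha damps hub influence more
    return d_inv.unsqueeze(1) * adj * d_inv.unsqueeze(0)


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on backward,
    so the encoder learns to fool the sensitive-attribute discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class SensitiveDiscriminator(nn.Module):
    """Predicts the sensitive attribute from node embeddings; trained adversarially."""
    def __init__(self, dim: int):
        super().__init__()
        self.clf = nn.Linear(dim, 1)

    def forward(self, z: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        return self.clf(GradReverse.apply(z, lam)).squeeze(-1)


# Usage sketch: rebalance the graph once, then pass encoder embeddings z (encoder
# not shown) through the discriminator with a binary cross-entropy loss.
adj = torch.tensor([[0., 1., 1.],
                    [1., 0., 0.],
                    [1., 0., 0.]])
adj_balanced = rebalance_adjacency(adj)
disc = SensitiveDiscriminator(dim=16)
logits = disc(torch.randn(3, 16))
```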
Pages: 15