Loss Minimization Yields Multicalibration for Large Neural Networks

Cited by: 0
Authors
Blasiok, Jaroslaw [1 ]
Gopalan, Parikshit [2 ]
Hu, Lunjia [3 ]
Kalai, Adam Tauman [4 ]
Nakkiran, Preetum [2 ]
Affiliations
[1] Swiss Fed Inst Technol, Zurich, Switzerland
[2] Apple, Palo Alto, CA USA
[3] Stanford Univ, Stanford, CA 94305 USA
[4] Microsoft Res, Cambridge, MA USA
Source
15TH INNOVATIONS IN THEORETICAL COMPUTER SCIENCE CONFERENCE, ITCS 2024 | 2024
Keywords
Multi-group fairness; loss minimization; neural networks
DOI
10.4230/LIPIcs.ITCS.2024.17
CLC classification
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Multicalibration is a notion of fairness for predictors that requires them to provide calibrated predictions across a large set of protected groups. Multicalibration is known to be a distinct goal from loss minimization, even for simple predictors such as linear functions. In this work, we consider the setting where the protected groups can be represented by neural networks of size k, and the predictors are neural networks of size n > k. We show that minimizing the squared loss over all neural nets of size n implies multicalibration for all but a bounded number of unlucky values of n. We also give evidence that our bound on the number of unlucky values is tight, given our proof technique. Previously, results of the flavor that loss minimization yields multicalibration were known only for predictors that were near the ground truth, and hence were rather limited in applicability. Unlike these, our results rely on the expressivity of neural nets and utilize the representation of the predictor.
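For context, one standard formulation of multicalibration from the multi-group fairness literature (the notation below is illustrative, not quoted from this paper; exact parameterizations vary): a predictor $p : \mathcal{X} \to [0,1]$ is $\alpha$-multicalibrated with respect to a class $\mathcal{C}$ of group-membership functions $c : \mathcal{X} \to [0,1]$ if

$\bigl|\,\mathbb{E}\bigl[\,c(x)\,(y - p(x)) \mid p(x) = v\,\bigr]\bigr| \le \alpha$

for every $c \in \mathcal{C}$ and (almost) every value $v$ in the range of $p$. Ordinary calibration is the special case where $\mathcal{C}$ contains only the constant function $c \equiv 1$; in the setting of this paper, $\mathcal{C}$ consists of the groups representable by neural networks of size $k$.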
Pages: 21