Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data

Cited by: 0
Authors:
Dang, Hien [1 ]
Tran, Tho [1 ]
Osher, Stanley [2 ]
Tran-The, Hung [3 ]
Ho, Nhat [4 ]
Nguyen, Tan [5 ]
Affiliations:
[1] FPT Software AI Ctr, Ho Chi Minh City, Vietnam
[2] Univ Calif Los Angeles, Dept Math, Los Angeles, CA USA
[3] Deakin Univ, Appl Artificial Intelligence Inst, Geelong, Vic, Australia
[4] Univ Texas Austin, Dept Stat & Data Sci, Austin, TX USA
[5] Natl Univ Singapore, Dept Math, Singapore, Singapore
Keywords:
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing. Surprisingly, these complex systems with massive numbers of parameters exhibit the same structural properties in their last-layer features and classifiers across canonical datasets when trained until convergence. In particular, it has been observed that the last-layer features collapse to their class-means, and that those class-means are the vertices of a simplex Equiangular Tight Frame (ETF). This phenomenon is known as Neural Collapse (NC). Recent papers have theoretically shown that NC emerges in the global minimizers of training problems with the simplified "unconstrained feature model". In this context, we take a step further and prove that NC occurs in deep linear networks for the popular mean squared error (MSE) and cross entropy (CE) losses, showing that global solutions exhibit NC properties across the linear layers. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of NC under the bias-free setting. Our results demonstrate the convergence of the last-layer features and classifiers to a geometry consisting of orthogonal vectors, whose lengths depend on the amount of data in their corresponding classes. Finally, we empirically validate our theoretical analyses on synthetic and practical network architectures in both balanced and imbalanced scenarios.
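The simplex ETF geometry named in the abstract can be written down explicitly: for K classes, the class-mean directions are unit vectors with pairwise cosine exactly -1/(K-1). A minimal NumPy sketch (the helper name `simplex_etf` is ours, not from the paper) constructs this frame and checks both properties:

```python
import numpy as np

def simplex_etf(K):
    """Columns form a K-class simplex Equiangular Tight Frame in R^K:
    unit-norm vectors whose pairwise inner product is exactly -1/(K-1)."""
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

K = 4
M = simplex_etf(K)
norms = np.linalg.norm(M, axis=0)   # each column has unit norm
G = M.T @ M                          # Gram matrix: 1 on the diagonal, -1/(K-1) off it
print(norms)
print(G[0, 1])                       # -1/(K-1) = -1/3 for K = 4
```

Under balanced training, NC says the last-layer class-means (after centering) align with such a frame; the paper's imbalanced-data result replaces it with orthogonal vectors whose lengths depend on the per-class sample counts.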
Pages: 75
Related papers (50 total)
  • [21] Neural Networks Learn Specified Information for Imbalanced Data Classification
    Huang, Zhan Ao
    Sang, Yongsheng
    Sun, Yanan
    Lv, Jiancheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 6719 - 6730
  • [22] EMSGD: An Improved Learning Algorithm of Neural Networks With Imbalanced Data
    Qian Ya-Guan
    Ma Jun
    Zhang Xi-Min
    Pan Jun
    Zhou Wu-Jie
    Wu Shu-Hui
    Yun Ben-Sheng
    Lei Jing-Sheng
    IEEE ACCESS, 2020, 8 : 64086 - 64098
  • [23] Classification of imbalanced remote-sensing data by neural networks
    Bruzzone, L
    Serpico, SB
    PATTERN RECOGNITION LETTERS, 1997, 18 (11-13) : 1323 - 1328
  • [24] Towards Effective Classification of Imbalanced Data with Convolutional Neural Networks
    Raj, Vidwath
    Magg, Sven
    Wermter, Stefan
    ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, 2016, 9896 : 150 - 162
  • [25] Dynamic Sampling in Convolutional Neural Networks for Imbalanced Data Classification
    Pouyanfar, Samira
    Tao, Yudong
    Mohan, Anup
    Tian, Haiman
    Kaseb, Ahmed S.
    Gauen, Kent
    Dailey, Ryan
    Aghajanzadeh, Sarah
    Lu, Yung-Hsiang
    Chen, Shu-Ching
    Shyu, Mei-Ling
    IEEE 1ST CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2018), 2018, : 112 - 117
  • [26] Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network?
    Yang, Yibo
    Chen, Shixiang
    Li, Xiangtai
    Xie, Liang
    Lin, Zhouchen
    Tao, Dacheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [27] Balanced Mini-batch Training for Imbalanced Image Data Classification with Neural Network
    Shimizu, Ryota
    Asako, Kosuke
    Ojima, Hiroki
    Morinaga, Shohei
    Hamada, Mototsugu
    Kuroda, Tadahiro
    2018 FIRST IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE FOR INDUSTRIES (AI4I 2018), 2018, : 27 - 30
  • [28] Piecewise linear neural networks and deep learning
    Tao, Qinghua
    Li, Li
    Huang, Xiaolin
    Xi, Xiangming
    Wang, Shuning
    Suykens, Johan A. K.
    NATURE REVIEWS METHODS PRIMERS, 2022, 2 (01)