Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity

Times Cited: 0
Authors
Li, Tianqin [1 ]
Wen, Ziqi [1 ]
Li, Yangfan [2 ]
Lee, Tai Sing [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Northwestern Univ, Evanston, IL USA
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into the deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object-recognition convolutional neural networks, the shape bias leads to greater robustness against distraction by style and pattern changes. For image-synthesis generative adversarial networks, the emerged shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas more distributed codes tend to favor texture. Our code is hosted at the GitHub repository: https://topk-shape-bias.github.io/
Pages: 12
Related Papers
50 records in total
  • [31] Grad Centroid Activation Mapping for Convolutional Neural Networks
    Lafabregue, Baptiste
    Weber, Jonathan
    Gancarski, Pierre
    Forestier, Germain
    2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021, : 184 - 191
  • [32] Convolutional neural networks rarely learn shape for semantic segmentation
    Zhang, Yixin
    Mazurowski, Maciej A.
    PATTERN RECOGNITION, 2024, 146
  • [33] Sparse Ternary Connect: Convolutional Neural Networks Using Ternarized Weights with Enhanced Sparsity
    Jin, Canran
    Sun, Heming
    Kimura, Shinji
    2018 23RD ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2018, : 190 - 195
  • [34] Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks
    Lu, Wenyan
    Yan, Guihai
    Li, Jiajun
    Gong, Shijun
    Jiang, Shuhao
    Wu, Jingya
    Li, Xiaowei
    IEEE TRANSACTIONS ON COMPUTERS, 2019, 68 (06) : 867 - 881
  • [35] Layer sparsity in neural networks
    Hebiri, Mohamed
    Lederer, Johannes
    Taheri, Mahsa
    JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2025, 234
  • [36] Examining Gender Bias of Convolutional Neural Networks via Facial Recognition
    Gwyn, Tony
    Roy, Kaushik
    FUTURE INTERNET, 2022, 14 (12)
  • [37] EMERGENCE OF SPARSITY AND MOTIFS IN GENE REGULATORY NETWORKS
    Zagorski, Marcin
    SUMMER SOLSTICE 2011 INTERNATIONAL CONFERENCE ON DISCRETE MODELS OF COMPLEX SYSTEMS, 2012, 5 (01): 171 - 180
  • [38] Evolving Convolutional Neural Networks through Grammatical Evolution
    Lima, Ricardo H. R.
    Pozo, Aurora T. R.
    PROCEEDINGS OF THE 2019 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION (GECCO'19 COMPANION), 2019, : 179 - 180
  • [39] SAR IMAGE DESPECKLING THROUGH CONVOLUTIONAL NEURAL NETWORKS
    Chierchia, G.
    Cozzolino, D.
    Poggi, G.
    Verdoliva, L.
    2017 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS), 2017, : 5438 - 5441
  • [40] Deprivation pockets through the lens of convolutional neural networks
    Wang, Jiong
    Kuffer, Monika
    Roy, Debraj
    Pfeffer, Karin
    REMOTE SENSING OF ENVIRONMENT, 2019, 234