Universal and Succinct Source Coding of Deep Neural Networks

Cited by: 0
Authors
Basu, S. [1,2]
Varshney, L.R. [2]
Affiliations
[1] Indian Institute of Technology Kanpur, Department of Electrical Engineering, Kanpur, India
[2] University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Electrical and Computer Engineering, Urbana, IL 61801, USA
Keywords
artificial neural networks; entropy coding; neural network compression; source coding
DOI
10.1109/JSAIT.2023.3261819
Abstract
Deep neural networks have shown incredible performance for inference tasks in a variety of domains, but require significant storage space, which limits scaling and use for on-device intelligence. This paper is concerned with finding universal lossless compressed representations of deep feedforward networks with synaptic weights drawn from discrete sets, and directly performing inference without full decompression. The basic insight that allows a lower rate than naïve approaches is recognizing that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferential operation. We provide efficient algorithms to dissipate this irrelevant uncertainty and then use arithmetic coding to nearly achieve the entropy bound in a universal manner. We also provide experimental results of our approach on several standard datasets. © 2020 IEEE.
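The abstract's core idea is that hidden units within a layer can be relabeled without changing the network's input-output function, so fixing a canonical order removes roughly log2(n!) bits of irrelevant labeling uncertainty for a layer of n units before entropy coding. Below is a minimal sketch of that canonicalization step in Python; it is not the authors' implementation: the name canonicalize_layer, the ternary weight alphabet, and the toy sizes are illustrative assumptions, and the arithmetic-coding stage described in the paper is omitted.

```
import numpy as np
from math import lgamma, log

def canonicalize_layer(W_in, W_out):
    """Put a layer's hidden units into a canonical (lexicographic) order.

    W_in  : (n_hidden, n_prev) incoming weights; row i feeds hidden unit i.
    W_out : (n_next, n_hidden) outgoing weights; column i reads hidden unit i.

    Permuting hidden units (rows of W_in together with columns of W_out)
    leaves the network's input-output function unchanged, so the encoder
    may fix any canonical order and need not transmit which one it chose.
    """
    order = np.lexsort(W_in.T[::-1])  # lexicographic order of the rows of W_in
    return W_in[order], W_out[:, order]

def bits_saved(n_hidden):
    """log2(n!) via the log-gamma function: labeling uncertainty removed
    for a layer of n_hidden interchangeable units."""
    return lgamma(n_hidden + 1) / log(2)

# Toy layer with weights drawn from the discrete set {-1, 0, +1} (an assumption).
rng = np.random.default_rng(0)
W_in = rng.choice([-1, 0, 1], size=(16, 8))
W_out = rng.choice([-1, 0, 1], size=(4, 16))

W_in_c, W_out_c = canonicalize_layer(W_in, W_out)
print(f"bits of labeling uncertainty removed ~ {bits_saved(16):.1f}")
```

An entropy coder such as an arithmetic coder, run over the canonicalized weight sequence with an adaptive model, would then approach the per-weight entropy in the universal manner the abstract describes.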
Pages: 732-745 (13 pages)
Related papers (50 total; first 10 shown)
  • [1] Universal Source Coding of Deep Neural Networks
    Basu, Sourya
    Varshney, Lav R.
    2017 DATA COMPRESSION CONFERENCE (DCC), 2017, : 310 - 319
  • [2] Replicator Neural Networks for Universal Optimal Source Coding
    Hecht-Nielsen, R.
    SCIENCE, 1995, 269 (5232) : 1860 - 1863
  • [3] Universal Consistency of Deep Convolutional Neural Networks
    Lin, Shao-Bo
    Wang, Kaidong
    Wang, Yao
    Zhou, Ding-Xuan
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2022, 68 (07) : 4610 - 4617
  • [4] A Review on Deep Neural Networks for ICD Coding
    Teng, Fei
    Liu, Yiming
    Li, Tianrui
    Zhang, Yi
    Li, Shuangqing
    Zhao, Yue
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (05) : 4357 - 4375
  • [5] Improve Robustness of Deep Neural Networks by Coding
    Huang, Kunping
    Raviv, Netanel
    Jain, Siddharth
    Upadhyaya, Pulakesh
    Bruck, Jehoshua
    Siegel, Paul H.
    Jiang, Anxiao
    2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020,
  • [6] TDSNN: From Deep Neural Networks to Deep Spike Neural Networks with Temporal-Coding
    Zhang, Lei
    Zhou, Shengyuan
    Zhi, Tian
    Du, Zidong
    Chen, Yunji
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1319 - 1326
  • [7] Universal Source Coding Over Generalized Complementary Delivery Networks
    Kimura, Akisato
    Uyematsu, Tomohiko
    Kuzuoka, Shigeaki
    Watanabe, Shun
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2009, 55 (03) : 1360 - 1373
  • [8] Variable Length Joint Source-Channel Coding of Text Using Deep Neural Networks
    Rao, Milind
    Farsad, Nariman
    Goldsmith, Andrea
    2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), 2018, : 81 - 85
  • [9] Universal Approximation Property of Hamiltonian Deep Neural Networks
    Zakwan, Muhammad
    d'Angelo, Massimiliano
    Ferrari-Trecate, Giancarlo
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 2689 - 2694
  • [10] Generalizing universal adversarial perturbations for deep neural networks
    Zhang, Yanghao
    Ruan, Wenjie
    Wang, Fu
    Huang, Xiaowei
    MACHINE LEARNING, 2023, 112 : 1597 - 1626