ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions

Cited: 36
Authors
Gao, Hongyang [1 ]
Wang, Zhengyang [1 ]
Cai, Lei [2 ]
Ji, Shuiwang [1 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
[2] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA 99164 USA
Funding
U.S. National Science Foundation;
Keywords
Convolutional codes; Image coding; Computational modeling; Kernel; Computational efficiency; Mobile handsets; Computer architecture; Deep learning; group convolution; channel-wise convolution; convolutional classification; model compression;
DOI
10.1109/TPAMI.2020.2975796
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) have shown great capability of solving various artificial intelligence tasks. However, the increasing model size has raised challenges in employing them in resource-limited applications. In this work, we propose to compress deep models by using channel-wise convolutions, which replace dense connections among feature maps with sparse ones in CNNs. Based on this novel operation, we build light-weight CNNs known as ChannelNets. ChannelNets use three instances of channel-wise convolutions, namely group channel-wise convolutions, depth-wise separable channel-wise convolutions, and the convolutional classification layer. Compared to prior CNNs designed for mobile devices, ChannelNets achieve a significant reduction in terms of the number of parameters and computational cost without loss in accuracy. Notably, our work represents an attempt to compress the fully-connected classification layer, which usually accounts for about 25 percent of total parameters in compact CNNs. Along this new direction, we investigate the behavior of our proposed convolutional classification layer and conduct detailed analysis. Based on our in-depth analysis, we further propose convolutional classification layers without weight-sharing. This new classification layer achieves a good trade-off between fully-connected classification layers and the convolutional classification layer. Experimental results on the ImageNet dataset demonstrate that ChannelNets achieve consistently better performance compared to prior methods.
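
The sketch below illustrates the channel-wise convolution described in the abstract: a small 1-D kernel slides along the channel dimension and is shared across all spatial positions, replacing dense channel connections with sparse ones. This is a minimal sketch, not the authors' reference implementation; the class name ChannelWiseConv, the default kernel size of 7, and the padding choice are illustrative assumptions, and the implementation via PyTorch's Conv3d is just one way to realize the operation.

```python
# Minimal sketch of a channel-wise convolution (illustrative, not the
# authors' reference code): one small 1-D kernel slides along the channel
# axis and is shared across all spatial positions, so each output channel
# connects to only a few neighbouring input channels instead of all of them.
import torch
import torch.nn as nn


class ChannelWiseConv(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # A (k, 1, 1) 3-D kernel convolves along the channel axis only;
        # an odd kernel size with symmetric padding preserves the channel
        # count. Parameter count is k, versus C_in * C_out for a dense
        # 1x1 convolution over the same channels.
        self.conv = nn.Conv3d(
            in_channels=1, out_channels=1,
            kernel_size=(kernel_size, 1, 1),
            padding=(kernel_size // 2, 0, 0),
            bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> add a dummy axis -> (N, 1, C, H, W),
        # convolve over C, then drop the dummy axis again.
        y = self.conv(x.unsqueeze(1))
        return y.squeeze(1)  # back to (N, C, H, W)


if __name__ == "__main__":
    x = torch.randn(2, 64, 28, 28)
    print(ChannelWiseConv()(x).shape)  # torch.Size([2, 64, 28, 28])
```

With only 7 weights in place of the 64 x 64 weights of a dense 1 x 1 convolution in this toy setting, the sketch shows the sparse channel connections the abstract refers to; the paper's group channel-wise convolutions, depth-wise separable channel-wise convolutions, and convolutional classification layer build on this same primitive.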
Pages: 2570-2581
Number of pages: 12
Related Papers
50 records in total
  • [31] Refine or Represent: Residual Networks with Explicit Channel-wise Configuration
    Shen, Yanyan
    Gao, Jinyang
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2682 - 2688
  • [32] DiCENet: Dimension-Wise Convolutions for Efficient Networks
    Mehta, Sachin
    Hajishirzi, Hannaneh
    Rastegari, Mohammad
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (05) : 2416 - 2425
  • [33] MRI RECONSTRUCTION VIA CASCADED CHANNEL-WISE ATTENTION NETWORK
    Huang, Qiaoying
    Yang, Dong
    Wu, Pengxiang
    Qu, Hui
    Yi, Jingru
    Metaxas, Dimitris
    2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, : 1622 - 1626
  • [34] Layer-Wise Training to Create Efficient Convolutional Neural Networks
    Zeng, Linghua
    Tian, Xinmei
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 631 - 641
  • [35] Deep metric learning via group channel-wise ensemble
    Li, Ping
    Zhao, Guopan
    Chen, Jiajun
    Xu, Xianghua
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [37] Take CARE: Improving Inherent Robustness of Spiking Neural Networks with Channel-wise Activation Recalibration Module
    Zhang, Yan
    Chen, Cheng
    Shen, Dian
    Wang, Meng
    Wang, Beilun
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 828 - 837
  • [38] DCAN: Dual Channel-Wise Alignment Networks for Unsupervised Scene Adaptation
    Wu, Zuxuan
    Han, Xintong
    Lin, Yen-Liang
    Uzunbas, Mustafa Gokhan
    Goldstein, Tom
    Lim, Ser Nam
    Davis, Larry S.
    COMPUTER VISION - ECCV 2018, PT V, 2018, 11209 : 535 - 552
  • [39] Underwater image enhancement via a channel-wise transmission estimation network
    Wang, Qiang
    Fu, Bo
    Fan, Huijie
    IET IMAGE PROCESSING, 2023, 17 (10) : 2958 - 2971
  • [40] Motor Imagery Classification Using Inter-Task Transfer Learning via a Channel-Wise Variational Autoencoder-Based Convolutional Neural Network
    Lee, Do-Yeun
    Jeong, Ji-Hoon
    Lee, Byeong-Hoo
    Lee, Seong-Whan
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, 30 : 226 - 237