ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions

Cited by: 36
Authors
Gao, Hongyang [1 ]
Wang, Zhengyang [1 ]
Cai, Lei [2 ]
Ji, Shuiwang [1 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
[2] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA 99164 USA
Funding
National Science Foundation (US);
Keywords
Convolutional codes; Image coding; Computational modeling; Kernel; Computational efficiency; Mobile handsets; Computer architecture; Deep learning; group convolution; channel-wise convolution; convolutional classification; model compression;
DOI
10.1109/TPAMI.2020.2975796
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) have shown great capability in solving various artificial intelligence tasks. However, their increasing model sizes raise challenges for deploying them in resource-limited applications. In this work, we propose to compress deep models by using channel-wise convolutions, which replace dense connections among feature maps with sparse ones in CNNs. Based on this novel operation, we build lightweight CNNs known as ChannelNets. ChannelNets use three instances of channel-wise convolutions, namely group channel-wise convolutions, depth-wise separable channel-wise convolutions, and the convolutional classification layer. Compared to prior CNNs designed for mobile devices, ChannelNets achieve a significant reduction in the number of parameters and computational cost without loss in accuracy. Notably, our work represents an attempt to compress the fully-connected classification layer, which usually accounts for about 25 percent of total parameters in compact CNNs. Along this new direction, we investigate the behavior of our proposed convolutional classification layer and conduct a detailed analysis. Based on this in-depth analysis, we further propose convolutional classification layers without weight sharing. These new classification layers achieve a good trade-off between fully-connected classification layers and the convolutional classification layer. Experimental results on the ImageNet dataset demonstrate that ChannelNets achieve consistently better performance compared to prior methods.
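The operation at the center of the abstract can be made concrete with a short sketch: a channel-wise convolution slides a small 1-D kernel along the channel dimension, sharing its few weights across all spatial positions, instead of densely connecting every input channel to every output channel. The following is a minimal, illustrative PyTorch implementation; the class name ChannelWiseConv, the kernel size dk, and the Conv3d-based wiring are assumptions made for exposition, not the authors' reference implementation.

    import torch
    import torch.nn as nn

    class ChannelWiseConv(nn.Module):
        # Channel-wise convolution sketch: a dk-tap 1-D kernel slides along
        # the channel axis, its weights shared across every spatial position.
        # Implemented via Conv3d by treating channels as the "depth" axis.
        def __init__(self, dk=7):
            super().__init__()
            self.conv = nn.Conv3d(1, 1, kernel_size=(dk, 1, 1),
                                  padding=(dk // 2, 0, 0), bias=False)

        def forward(self, x):          # x: (N, C, H, W)
            x = x.unsqueeze(1)         # (N, 1, C, H, W); channels become depth
            x = self.conv(x)           # convolve along the channel axis only
            return x.squeeze(1)        # back to (N, C, H, W) for odd dk

    x = torch.randn(2, 64, 14, 14)
    y = ChannelWiseConv(dk=7)(x)
    print(y.shape)                     # torch.Size([2, 64, 14, 14])

Under these assumptions, the layer uses dk = 7 parameters in place of the C x C = 4,096 weights of a dense 1 x 1 convolution over 64 channels, which is where the compression comes from; the group channel-wise and depth-wise separable variants named in the abstract build on this same primitive.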
Pages: 2570 - 2581
Page count: 12
Related Papers
50 records in total
  • [41] Combining channel-wise joint attention and temporal attention in graph convolutional networks for skeleton-based action recognition
    Sun, Zhonghua
    Wang, Tianyi
    Dai, Meng
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) : 2481 - 2488
  • [42] Deep saliency detection via channel-wise hierarchical feature responses
    Li, Cuiping
    Chen, Zhenxue
    Wu, Q. M. Jonathan
    Liu, Chengyun
    NEUROCOMPUTING, 2018, 322 : 80 - 92
  • [44] Batch Inference on Deep Convolutional Neural Networks With Fully Homomorphic Encryption Using Channel-By-Channel Convolutions
    Cheon, Jung Hee
    Kang, Minsik
    Kim, Taeseong
    Jung, Junyoung
    Yeo, Yongdong
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1674 - 1685
  • [45] Simple and Generic Framework for Feature Distillation via Channel-wise Transformation
    Liu, Ziwei
    Wang, Yongtao
    Chu, Xiaojie
    Dong, Nan
    Qi, Shengxiang
    Ling, Haibin
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1121 - 1130
  • [46] Interpreting Convolutional Neural Networks via Layer-Wise Relevance Propagation
    Jia, Wohuan
    Zhang, Shaoshuai
    Jiang, Yue
    Xu, Li
    ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT I, 2022, 13338 : 457 - 467
  • [47] Channel-Wise Attention Mechanism in the 3D Convolutional Network for Lung Nodule Detection
    Zhu, Xiaoyu
    Wang, Xiaohua
    Shi, Yueting
    Ren, Shiwei
    Wang, Weijiang
    ELECTRONICS, 2022, 11 (10)
  • [48] Low-light Image Enhancement via Channel-wise Intensity Transformation
    Park, Jaemin
    Vien, An Gia
    Lee, Chul
    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2022, 2022, 12177
  • [49] Compressing convolutional neural networks with cheap convolutions and online distillation
    Xie, Jiao
    Lin, Shaohui
    Zhang, Yichen
    Luo, Linkai
    DISPLAYS, 2023, 78
  • [50] SCAR: Spatial-/channel-wise attention regression networks for crowd counting
    Gao, Junyu
    Wang, Qi
    Yuan, Yuan
    NEUROCOMPUTING, 2019, 363 : 1 - 8