Optimized separable convolution: Yet another efficient convolution operator

Cited by: 1
Authors
Wei, Tao [1 ]
Tian, Yonghong [2 ,3 ]
Wang, Yaowei [4 ]
Liang, Yun [5 ]
Chen, Chang Wen [6 ]
Affiliations
[1] Univ Buffalo State Univ New York, CSE Dept, Buffalo, NY 14260 USA
[2] Peking Univ, Sch ECE, Shenzhen, Peoples R China
[3] Peking Univ, Sch CS, Shenzhen, Peoples R China
[4] Pengcheng Lab, Shenzhen, Peoples R China
[5] Peking Univ, Sch Integrated Circuit, Beijing, Peoples R China
[6] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Source
AI OPEN | 2022, Vol. 3
Keywords
Deep neural network; Separable convolution;
DOI
10.1016/j.aiopen.2022.10.002
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The convolution operation is the most critical component in the recent surge of deep learning research. A conventional 2D convolution needs O(C^2 K^2) parameters to represent, where C is the channel size and K is the kernel size. This parameter count has become increasingly costly as models have grown to meet the needs of demanding applications. Among the various implementations of convolution, separable convolution has proven more efficient at reducing model size. For example, depth-wise separable convolution reduces the complexity to O(C · (C + K^2)), while spatial separable convolution reduces it to O(C^2 K). However, these are ad hoc designs that cannot in general guarantee optimal separation. In this research, we propose a novel and principled operator, called optimized separable convolution, which optimally designs the internal number of groups and kernel sizes for general separable convolutions and achieves a complexity of O(C^{3/2} K). When the restriction on the number of separated convolutions is lifted, an even lower complexity of O(C · log(C K^2)) can be achieved. Experimental results demonstrate that the proposed optimized separable convolution achieves an improved accuracy-#Params trade-off over conventional, depth-wise, and depth/spatial separable convolutions.
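The parameter-count comparison in the abstract can be made concrete with a small sketch. The functions below compute exact parameter counts for the conventional, depth-wise separable, and spatial separable variants (assuming C input and C output channels and a K × K kernel, no bias terms), plus the asymptotic O(C^{3/2} K) bound the abstract reports for the proposed operator; the paper's actual group/kernel optimization is not reproduced here.

```python
def conventional(C, K):
    # Standard 2D convolution with C input and C output channels: C^2 K^2 weights.
    return C * C * K * K

def depthwise_separable(C, K):
    # Depthwise KxK (C * K^2 weights) followed by 1x1 pointwise (C * C weights):
    # C * (C + K^2) in total.
    return C * K * K + C * C

def spatial_separable(C, K):
    # Full Kx1 convolution followed by a full 1xK convolution: 2 * C^2 * K weights.
    return C * C * K + C * C * K

def optimized_bound(C, K):
    # Asymptotic bound reported in the abstract for the proposed operator
    # (order of growth only, not an exact count).
    return C ** 1.5 * K

for C, K in [(64, 3), (256, 3), (512, 3)]:
    print(f"C={C:4d} K={K}: conv={conventional(C, K):9d} "
          f"depthwise={depthwise_separable(C, K):8d} "
          f"spatial={spatial_separable(C, K):8d} "
          f"optimized~{round(optimized_bound(C, K)):8d}")
```

For C = 64 and K = 3, conventional convolution needs 36,864 parameters, depth-wise separable only 4,672, spatial separable 24,576, and the optimized bound is on the order of 1,536, illustrating the savings the abstract claims.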
Pages: 162-171
Page count: 10
Related Papers
50 items in total
  • [21] PERTURBATION OF A SURJECTIVE CONVOLUTION OPERATOR
    Musin, I. Kh.
    [J]. UFA MATHEMATICAL JOURNAL, 2016, 8 (04): 123 - 130
  • [22] CONVOLUTION OPERATOR ON A FINITE INTERVAL
    LJUBARSKII, JI
    [J]. MATHEMATICS OF THE USSR-IZVESTIYA, 1977, 11 (03): 583 - 611
  • [23] APPROXIMATION AND CONVOLUTION OPERATOR TRANSFER
    LOHOUE, N
    [J]. ANNALES DE L INSTITUT FOURIER, 1976, 26 (04) : 133 - 150
  • [24] CONVOLUTION PRODUCT OF OPERATOR MEASURES
    BERBERIAN, SK
    [J]. ACTA SCIENTIARUM MATHEMATICARUM, 1978, 40 (1-2): 3 - 8
  • [25] A REMARK ON CERTAIN CONVOLUTION OPERATOR
    Liu, Jinlin
    [J]. Journal of Suzhou University (Natural Science Edition), 1993, (02): 128 - 130
  • [26] Boundedness for the commutator of convolution operator
    Tang, L
    [J]. JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS, 2003, 279 (02) : 545 - 555
  • [27] An Energy-efficient Convolution Unit for Depthwise Separable Convolutional Neural Networks
    Chong, Yi Sheng
    Goh, Wang Ling
    Ong, Yew Soon
    Nambiar, Vishnu P.
    Do, Anh Tuan
    [J]. 2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [28] Energy-Efficient Parameterized 2-D Separable Convolution on FPGA
    Hu, Yusong
    Prasanna, Viktor K.
    [J]. 2014 INTERNATIONAL GREEN COMPUTING CONFERENCE (IGCC), 2014,
  • [29] Semantic Segmentation of Retinal Vessel Images via Dense Convolution and Depth Separable Convolution
    Zhu, Zihui
    Gu, Hengrui
    Zhang, Zhengming
    Huang, Yongming
    Yang, Luxi
    [J]. PROCEEDINGS OF THE 2019 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2019), 2019, : 137 - 142
  • [30] SpiralNet++: A Fast and Highly Efficient Mesh Convolution Operator
    Gong, Shunwang
    Chen, Lei
    Bronstein, Michael
    Zafeiriou, Stefanos
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 4141 - 4148