Optimizing Convolutional Neural Network Architectures

Cited by: 0
Authors
Balderas, Luis [1 ,2 ,3 ,4 ]
Lastra, Miguel [2 ,3 ,4 ,5 ]
Benitez, Jose M. [1 ,2 ,3 ,4 ]
Affiliations
[1] Univ Granada, Dept Comp Sci & Artificial Intelligence, Granada 18071, Spain
[2] Univ Granada, Distributed Computat Intelligence & Time Series Lab, Granada 18071, Spain
[3] Univ Granada, Sport & Hlth Univ Res Inst, Granada 18071, Spain
[4] Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligence, Granada 18071, Spain
[5] Univ Granada, Dept Software Engn, Granada 18071, Spain
Keywords
convolutional neural network simplification; neural network pruning; efficient machine learning; Green AI; LSTM;
DOI
10.3390/math12193032
Chinese Library Classification
O1 [Mathematics];
Subject Classification Codes
0701; 070101;
Abstract
Convolutional neural networks (CNNs) are commonly employed for demanding applications such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their use on devices with limited resources (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development and research is increasingly drawing attention: Green AI. Motivated by an interest in optimizing Machine Learning models, in this paper we propose Optimizing Convolutional Neural Network Architectures (OCNNA), a novel CNN optimization and construction method based on pruning and designed to establish the importance of convolutional layers. The proposal was evaluated through a thorough empirical study covering the best-known datasets (CIFAR-10, CIFAR-100, and ImageNet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), with accuracy drop and remaining-parameters ratio as the objective metrics used to compare the performance of OCNNA with other state-of-the-art approaches. Our method was compared with more than 20 convolutional neural network simplification algorithms, obtaining outstanding results. As a result, OCNNA is a competitive CNN construction method which could ease the deployment of neural networks on IoT or resource-limited devices.
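The abstract does not detail the pruning criterion used by OCNNA. As a rough illustration of the two evaluation metrics it mentions (accuracy drop and remaining-parameters ratio) in the context of filter-level pruning, the following is a minimal, hypothetical Python/PyTorch sketch of a simple L1-magnitude filter-pruning baseline; it is not the OCNNA method, and the function names (prune_filters_by_l1, remaining_parameters_ratio) are illustrative assumptions.

# Hypothetical sketch (not OCNNA): magnitude-based filter pruning and the
# two evaluation metrics named in the abstract, assuming PyTorch is available.
import torch
import torch.nn as nn

def prune_filters_by_l1(conv: nn.Conv2d, keep_ratio: float = 0.5) -> None:
    # Score each output filter by the L1 norm of its weights and zero out
    # the lowest-scoring ones (a common filter-pruning baseline).
    with torch.no_grad():
        scores = conv.weight.abs().sum(dim=(1, 2, 3))
        n_keep = max(1, int(round(keep_ratio * scores.numel())))
        keep = torch.topk(scores, n_keep).indices
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask[keep] = True
        conv.weight[~mask] = 0.0
        if conv.bias is not None:
            conv.bias[~mask] = 0.0

def remaining_parameters_ratio(model: nn.Module) -> float:
    # Fraction of parameters still non-zero after pruning.
    total = sum(p.numel() for p in model.parameters())
    nonzero = sum(int(p.count_nonzero()) for p in model.parameters())
    return nonzero / total

# Toy usage: prune every convolutional layer of a small model.
# Accuracy drop would be obtained by re-evaluating the pruned model on the
# validation set and subtracting its accuracy from the unpruned baseline.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune_filters_by_l1(module, keep_ratio=0.5)
print(f"Remaining parameters ratio: {remaining_parameters_ratio(model):.2f}")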
Pages: 19
Related Papers
50 in total
  • [1] Should You Go Deeper? Optimizing Convolutional Neural Network Architectures without Training
    Richter, Mats L.
    Schoening, Julius
    Wiedenroth, Anna
    Krumnack, Ulf
    [J]. 20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 964 - 971
  • [2] Optimizing Convolutional Neural Network on DSP
    Jagannathan, Shyam
    Mody, Mihir
    Mathew, Manu
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2016,
  • [3] Bandwidth Efficient Architectures for Convolutional Neural Network
    Wang, Jichen
    Lin, Jun
    Wang, Zhongfeng
    [J]. PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2018, : 94 - 99
  • [4] A review of convolutional neural network architectures and their optimizations
    Cong, Shuang
    Zhou, Yang
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (03) : 1905 - 1969
  • [5] Efficient Convolution Architectures for Convolutional Neural Network
    Wang, Jichen
    Lin, Jun
    Wang, Zhongfeng
    [J]. 2016 8TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS & SIGNAL PROCESSING (WCSP), 2016,
  • [6] A review of convolutional neural network architectures and their optimizations
    Shuang Cong
    Yang Zhou
    [J]. Artificial Intelligence Review, 2023, 56 : 1905 - 1969
  • [7] Optimizing Deep Neural Network Architectures: an overview
    Bouzar-Benlabiod, Lydia
    Rubin, Stuart H.
    Benaida, Amel
    [J]. 2021 IEEE 22ND INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE (IRI 2021), 2021, : 25 - 32
  • [8] Convolutional Neural Network Architectures for Signals Supported on Graphs
    Gama, Fernando
    Marques, Antonio G.
    Leus, Geert
    Ribeiro, Alejandro
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2019, 67 (04) : 1034 - 1049
  • [9] Efficient Fast Convolution Architectures for Convolutional Neural Network
    Xu, Weihong
    Wang, Zhongfeng
    You, Xiaohu
    Zhang, Chuan
    [J]. 2017 IEEE 12TH INTERNATIONAL CONFERENCE ON ASIC (ASICON), 2017, : 904 - 907
  • [10] Automated Search for Configurations of Convolutional Neural Network Architectures
    Ghamizi, Salah
    Cordy, Maxime
    Papadakis, Mike
    Le Traon, Yves
    [J]. SPLC'19: PROCEEDINGS OF THE 23RD INTERNATIONAL SYSTEMS AND SOFTWARE PRODUCT LINE CONFERENCE, VOL A, 2020, : 119 - 130