StressedNets: Efficient feature representations via stress-induced evolutionary synthesis of deep neural networks

Cited by: 4
Authors
Shafiee, Mohammad Javad [1 ]
Chwyl, Brendan [1 ]
Li, Francis [1 ]
Chen, Rongyan [2 ]
Karg, Michelle [2 ]
Scharfenberger, Christian [2 ]
Wong, Alexander [1 ]
Affiliations
[1] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON, Canada
[2] ADC Automotive Distance Control Systems GmbH, Continental, Lindau, Germany
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Deep neural networks; Stress-induced evolutionary synthesis; Evolutionary synthesis;
DOI
10.1016/j.neucom.2019.03.028
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The computational complexity of leveraging deep neural networks to extract deep feature representations is a significant barrier to their widespread adoption, and is a particular bottleneck for embedded devices and applications such as self-driving cars. One promising strategy for addressing this complexity is the evolutionary synthesis of deep neural networks, which has been demonstrated to produce highly efficient networks while retaining modeling performance. Here, we extend the evolutionary synthesis strategy to achieve efficient feature extraction. A stress-induced evolutionary synthesis framework is proposed in which stress signals are imposed upon the synapses of a deep neural network during the training phase. This induced stress steers the synthesis process towards producing more efficient deep neural networks over successive generations, improving model fidelity at greater efficiency. Applying stress during training also helps a network adapt to the changes that occur at the evolution step. The proposed stress-induced evolutionary synthesis approach is evaluated on a variety of deep neural network architectures (LeNet5, AlexNet, and YOLOv2) and tasks (object classification and object detection) to synthesize efficient StressedNets over multiple generations. Experimental results demonstrate the efficacy of the proposed framework, synthesizing StressedNets with significant improvements in network architecture efficiency (e.g., 40x for AlexNet and 33x for YOLOv2) and inference speed (e.g., 5.5x speed-up for YOLOv2 on an Nvidia Tegra X1 mobile processor). (C) 2019 Elsevier B.V. All rights reserved.
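The stress-then-synthesize loop described in the abstract can be sketched, very loosely, as follows. This is an illustrative toy under assumed details, not the authors' formulation: the multiplicative noise model of "stress", the magnitude-based survival rule, and the fixed keep fraction are all assumptions introduced here for illustration.

```python
import random

def apply_stress(weights, intensity=0.1, rng=random):
    # Impose stress signals: randomly perturb each synapse during training.
    return [w * (1.0 + rng.uniform(-intensity, intensity)) for w in weights]

def synthesize_offspring(weights, keep_fraction=0.6):
    # Evolutionary synthesis: only the strongest surviving synapses
    # (by magnitude) carry over to the next-generation network.
    alive = sorted(abs(w) for w in weights if w != 0.0)
    cutoff = alive[int(len(alive) * (1.0 - keep_fraction))]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1000)]
for generation in range(3):
    weights = apply_stress(weights)          # stress during "training"
    weights = synthesize_offspring(weights)  # synthesize next generation
density = sum(1 for w in weights if w != 0.0) / len(weights)
print(f"surviving synapse density after 3 generations: {density:.3f}")
```

Each generation retains 60% of the still-active synapses, so the density shrinks geometrically; the real framework instead learns which synapses to retain from the stress-conditioned training dynamics rather than a fixed fraction.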
Pages: 93-105
Page count: 13