Efficient Sparse Networks from Watts-Strogatz Network Priors

Citations: 0
Authors
Traub, Tamas [1]
Nashouqu, Mohamad [1]
Gulyas, Laszlo [1]
Affiliations
[1] Eotvos Lorand Univ, Inst Ind Acad Innovat, Fac Informat, Dept Artificial Intelligence, Budapest, Hungary
Keywords
Sparse Neural Networks; Network Science; Deep Learning; Graph Theory
DOI
10.1007/978-3-031-41456-5_13
CLC Classification
TP18 (Theory of Artificial Intelligence)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper studies the accuracy and the structural properties of sparse neural networks (SNNs) generated by weight pruning and by using Watts-Strogatz (WS) network priors. The study involves Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) architectures, trained on the MNIST dataset. The paper replicates and extends previous work, showing that networks generated by appropriately selected WS priors guarantee high-quality results, and that these networks outperform pruned networks in terms of accuracy. In addition, observations are made regarding the structural change induced by network pruning and its implications for accuracy. The findings of this study provide important insights for creating lighter models with lower computational needs, which can achieve results comparable to those of more complex models.
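The abstract does not spell out how a WS prior becomes a layer's sparsity pattern. Below is a minimal Python sketch of one plausible construction, assuming a bipartite reading of a Watts-Strogatz graph: nodes are split into inputs and outputs, and only edges crossing the split become trainable weights. The names (ws_mask, SparseLinear) and this particular mapping are illustrative assumptions, not the authors' exact method.

```python
import networkx as nx
import torch
import torch.nn as nn
import torch.nn.functional as F

def ws_mask(in_features, out_features, k=4, p=0.1, seed=0):
    # Hypothetical construction: build one WS graph over all units and
    # keep only the edges that cross the input/output split.
    n = in_features + out_features
    g = nx.watts_strogatz_graph(n, k, p, seed=seed)
    mask = torch.zeros(out_features, in_features)
    for u, v in g.edges():
        if u < in_features <= v:    # u is an input node, v an output node
            mask[v - in_features, u] = 1.0
        elif v < in_features <= u:  # v is an input node, u an output node
            mask[u - in_features, v] = 1.0
    return mask

class SparseLinear(nn.Module):
    """Linear layer whose weights are pinned to a fixed sparse pattern."""
    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Masking inside forward() zeroes both the inactive weights and
        # their gradients, so the sparsity pattern never changes.
        return F.linear(x, self.weight * self.mask, self.bias)

# Example: a sparse 784 -> 128 MNIST hidden layer from a WS prior.
layer = SparseLinear(784, 128, ws_mask(784, 128, k=6, p=0.2))
```

Registering the mask as a buffer fixes the connectivity before training begins, which is the defining feature of the prior-based route the paper compares against post-hoc weight pruning.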
Pages: 163-175
Page count: 13