A design method of an MLP minimizing the quantization effect of the weights and the neuron outputs

Citations: 0
Authors
Kwon, OJ [1 ]
Bang, SY [1 ]
Affiliation
[1] Pohang Univ Sci & Technol, Dept Comp Sci & Engn, Nam Gu, Pohang 790784, South Korea
Keywords
DOI
Not available
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
When a multilayer perceptron is implemented in digital VLSI technology, the weights and the neuron outputs generally have to be quantized. These quantizations eventually cause some distortion in the output of the network for a given input. In this paper, based on our earlier analysis of the effect caused by these quantizations, we present a design method for an MLP that minimizes the quantization effect when the precision of the quantization is given. To show the effectiveness of the proposed method, we developed a network by our method and compared it with one trained by regular backpropagation. We confirmed that the network developed by our method performs better, even with a low quantization precision.
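The distortion the abstract describes can be illustrated with a small simulation. The sketch below (an illustration, not the paper's method) applies uniform fixed-point quantization to the weights and hidden-layer outputs of a hypothetical two-layer sigmoid MLP and measures how far the quantized network's output drifts from the full-precision output as the bit width shrinks; all sizes and names are assumptions.

```python
import numpy as np

def quantize(x, bits, x_range=1.0):
    # Uniform fixed-point quantization to 2**bits levels over [-x_range, x_range].
    step = 2.0 * x_range / (2 ** bits)
    return np.clip(np.round(x / step) * step, -x_range, x_range)

def mlp_forward(x, W1, W2, wq=None, aq=None):
    # Two-layer sigmoid MLP; optionally quantize the weights to wq bits
    # and the hidden neuron outputs to aq bits (hypothetical setup).
    if wq is not None:
        W1, W2 = quantize(W1, wq), quantize(W2, wq)
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))  # hidden activations in (0, 1)
    if aq is not None:
        h = quantize(h, aq)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 4))    # 100 random inputs
W1 = rng.uniform(-1, 1, size=(4, 8))     # random weights stand in for a trained net
W2 = rng.uniform(-1, 1, size=(8, 1))

exact = mlp_forward(x, W1, W2)
for bits in (8, 6, 4):
    approx = mlp_forward(x, W1, W2, wq=bits, aq=bits)
    print(bits, "bits -> max output distortion:", np.abs(exact - approx).max())
```

Lowering the bit width increases the worst-case output distortion, which is the effect the proposed design method aims to minimize at a fixed precision.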
Pages: 279-282
Page count: 4