Repeated Potentiality Augmentation for Multi-layered Neural Networks

Cited by: 0
Authors
Kamimura, Ryotaro [1 ,2 ]
Affiliations
[1] Tokai Univ, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
[2] Kumamoto Drone Technol & Dev Fdn, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
Source
ADVANCES IN INFORMATION AND COMMUNICATION, FICC, VOL 2 | 2023 / Vol. 652
Keywords
Equi-potentiality; Total potentiality; Relative potentiality; Collective interpretation; Partial interpretation; MUTUAL INFORMATION; LEARNING-MODELS; CLASSIFICATION; MAXIMIZE; INPUT; MAPS;
DOI
10.1007/978-3-031-28073-3_9
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
The present paper proposes a new method to augment the potentiality of components in neural networks. The basic hypothesis is that all components should have equal potentiality (equi-potentiality) to be used in learning. This equi-potentiality of components has implicitly played a critical role in improving multi-layered neural networks. We introduce the total potentiality and relative potentiality for each hidden layer, and we force networks to increase the potentiality as much as possible so as to realize equi-potentiality. In addition, the potentiality augmentation is repeated whenever the potentiality tends to decrease, increasing the chance that all components are used as equally as possible. We applied the method to a bankruptcy data set. By maintaining the equi-potentiality of components through repeated cycles of potentiality augmentation and reduction, we observed improved generalization. Furthermore, by considering all representations produced by the repeated potentiality augmentation, we can interpret which inputs contribute to the final performance of the networks.
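The abstract does not give concrete formulas, so the following is a minimal, hypothetical sketch of what an equi-potentiality scheme of this kind could look like. Every definition here is an assumption made for illustration, not the paper's actual formulation: "potentiality" of a hidden unit is taken as the variance of its incoming weights, relative potentiality is that value normalized by the layer maximum, and augmentation rescales weaker units and is repeated while the total potentiality stays below a target.

```python
import numpy as np

def relative_potentiality(W):
    # Hypothetical measure: a hidden unit's potentiality is the variance of
    # its incoming weights (columns of W); relative potentiality rescales
    # this to [0, 1] by the layer maximum. This is an assumption, not the
    # paper's exact definition.
    p = W.var(axis=0)
    return p / (p.max() + 1e-12)

def total_potentiality(W):
    # Total potentiality: mean relative potentiality over all hidden units.
    # It reaches 1 only when every unit is equally potent (equi-potentiality).
    return relative_potentiality(W).mean()

def augment(W, rate=0.1):
    # Push weak units toward the equi-potential state by scaling each
    # unit's weights up in proportion to its potentiality deficit.
    r = relative_potentiality(W)
    return W * (1.0 + rate * (1.0 - r))

def repeated_augmentation(W, threshold=0.9, max_steps=100):
    # Repeat augmentation whenever total potentiality falls below the
    # threshold, mimicking the abstract's "repeated at any time the
    # potentiality tends to decrease".
    for _ in range(max_steps):
        if total_potentiality(W) >= threshold:
            break
        W = augment(W)
    return W

rng = np.random.default_rng(0)
# 4 hidden units with deliberately unequal incoming-weight scales
W = rng.normal(scale=[0.1, 1.0, 0.5, 0.2], size=(8, 4))
before = total_potentiality(W)
W = repeated_augmentation(W)
print(before < total_potentiality(W))  # augmentation raises total potentiality
```

Because each weak unit is multiplied by a factor greater than one while the strongest unit is left unchanged, the relative potentialities increase monotonically toward 1, so the loop converges to an (approximately) equi-potential layer under these assumed definitions.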
Pages: 117-134
Page count: 18