Neural self-compressor: Collective interpretation by compressing multi-layered neural networks into non-layered networks

Cited by: 14
Authors
Kamimura, Ryotaro [1 ]
Institution
[1] Tokai Univ, IT Educ Ctr, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 2591292, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
Model compression; self-compression; collective interpretation; mutual information; multi-layered neural networks; INFORMATION MAXIMIZATION; MUTUAL INFORMATION; RULE EXTRACTION;
DOI
10.1016/j.neucom.2018.09.036
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The present paper proposes a new method, called "neural self-compressors," to compress multi-layered neural networks into the simplest possible ones (i.e., without hidden layers) to aid in interpreting the relations between inputs and outputs. Though neural networks have shown great success in improving generalization, interpreting their internal representations becomes a serious problem as the number of hidden layers and corresponding connection weights grows. To overcome this interpretation problem, we introduce a method that compresses multi-layered neural networks into ones without hidden layers. In addition, the method simplifies entangled weights as much as possible by maximizing mutual information between inputs and outputs. In this way, the final connection weights can be interpreted as easily as the coefficients of a logistic regression analysis. The method was applied to four data sets: a symmetric data set, an ovarian cancer data set, a restaurant data set, and a credit card holders' default data set. With the first, the symmetric data set, we explain intuitively how the present method produces interpretable outputs. In all the other cases, we succeeded in compressing multi-layered neural networks into their simplest forms with the help of mutual information maximization. In addition, by de-correlating the outputs, we were able to transform the connection weights from values close to regression coefficients into ones with more explicit features. (C) 2018 Elsevier B.V. All rights reserved.
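The core idea of the abstract — mapping a multi-layered network down to a single input-to-output weight matrix that can be read like regression coefficients — can be illustrated with a minimal sketch. This is not the paper's algorithm (which handles nonlinear networks and uses mutual information maximization to disentangle the weights); it only shows the degenerate linear case, where compression reduces to multiplying the layer weight matrices together. All names and dimensions below are hypothetical.

```python
import numpy as np

# Hypothetical toy 3-layer *linear* network: 4 inputs -> 5 -> 3 -> 2 outputs.
# For purely linear layers, the whole stack collapses exactly into one
# input-to-output matrix; the paper addresses the much harder nonlinear case.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 3))
W3 = rng.normal(size=(3, 2))

def forward(x):
    """Layer-by-layer pass through the multi-layered network."""
    return x @ W1 @ W2 @ W3

# "Compressed" network: a single weight matrix with no hidden layers,
# whose entries can be inspected like logistic-regression coefficients.
W_compressed = W1 @ W2 @ W3          # shape (4, 2)

x = rng.normal(size=(10, 4))
assert np.allclose(forward(x), x @ W_compressed)
```

In the nonlinear setting the collapse is no longer exact, which is why the paper's self-compressor additionally maximizes mutual information between inputs and outputs so that the compressed weights remain faithful and interpretable.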
Pages: 12-36
Page count: 25