Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and WikiText-2 word-modeling tasks while using fewer parameters and converging in less than half the number of iterations.
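To make the idea of an input-conditioned parameterization concrete, here is a minimal NumPy sketch, not the paper's actual architecture: a small adapter network maps the input to per-unit scales that modulate a shared feed-forward layer, so the effective transformation changes with each input. All names here (`AdaptiveDense`, `adapter_dim`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class AdaptiveDense:
    """Hypothetical feed-forward layer with input-conditional parameterization.

    A shared weight matrix W is modulated by a gate produced from the input
    by a small low-rank adapter (A, B). This is a sketch of the general idea,
    not the adaptation policy used in the paper.
    """

    def __init__(self, d_in, d_out, adapter_dim=16):
        self.W = rng.normal(0, d_in ** -0.5, (d_in, d_out))  # shared weights
        self.b = np.zeros(d_out)
        # low-rank adapter mapping the input to a per-output-unit scale
        self.A = rng.normal(0, d_in ** -0.5, (d_in, adapter_dim))
        self.B = rng.normal(0, adapter_dim ** -0.5, (adapter_dim, d_out))

    def __call__(self, x):
        # input-conditional gate in (0, 2), equal to 1 for a zero pre-gate
        gate = 2.0 / (1.0 + np.exp(-(x @ self.A) @ self.B))
        # standard feed-forward activation, rescaled per unit by the gate
        return np.tanh(x @ self.W + self.b) * gate

layer = AdaptiveDense(d_in=8, d_out=4)
y = layer(rng.normal(size=(2, 8)))  # batch of 2 inputs
print(y.shape)  # (2, 4)
```

Because the adapter is low-rank, the extra parameter cost is small relative to W, which is consistent with the abstract's claim of improved statistical efficiency; the same gating idea can be applied inside each gate of an LSTM cell to obtain an adaptive recurrent layer.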