The Bias-Expressivity Trade-off

Cited by: 3
Authors
Lauw, Julius [1 ]
Macias, Dominique [1 ]
Trikha, Akshay [1 ]
Vendemiatti, Julia [1 ]
Montanez, George D. [1 ]
Affiliation
[1] Harvey Mudd Coll, Dept Comp Sci, AMISTAD Lab, Claremont, CA 91711 USA
Source
ICAART: PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2 | 2020
Keywords
Machine Learning; Algorithmic Search; Inductive Bias; Entropic Expressivity;
DOI
10.5220/0008959201410150
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning algorithms need bias to generalize and perform better than random guessing. We examine the flexibility (expressivity) of biased algorithms. An expressive algorithm can adapt to changing training data, altering its outcome based on changes in its input. We measure expressivity using an information-theoretic notion of entropy on algorithm outcome distributions, demonstrating a trade-off between bias and expressivity: the degree to which an algorithm is biased is the degree to which it can outperform uniform random sampling, but it is also the degree to which it becomes inflexible. We derive bounds relating bias to expressivity, proving the necessary trade-offs inherent in trying to create strongly performing yet flexible algorithms.
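The abstract's entropy-based measure of expressivity can be illustrated with a minimal sketch. Here, the outcome distributions and the 4-element search space are made-up illustrative values, not data from the paper: an algorithm's expected outcome distribution over the search space is averaged across training inputs, and its Shannon entropy is compared against the maximum achieved by uniform random sampling.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical outcome distributions of a biased search algorithm over a
# 4-element search space, one distribution per training input (illustrative).
outcome_dists = [
    [0.7, 0.1, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
]

# Expected (averaged) outcome distribution across training inputs.
n = len(outcome_dists)
avg = [sum(d[i] for d in outcome_dists) / n for i in range(len(outcome_dists[0]))]

uniform = [0.25] * 4  # uniform random sampling baseline

print(entropy(avg))      # entropic expressivity of the biased algorithm: below 2 bits
print(entropy(uniform))  # maximal expressivity: log2(4) = 2 bits
```

Concentrating probability mass on favored outcomes (bias) necessarily lowers the entropy of the averaged distribution below the uniform maximum, which is the trade-off the paper quantifies with bounds.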
Pages: 141-150
Page count: 10