Continuity of Approximation by Neural Networks in Lp Spaces

Cited: 0
Authors
Paul C. Kainen
Věra Kůrková
Andrew Vogt
Affiliations
[1] Georgetown University, Department of Mathematics
[2] Academy of Sciences of the Czech Republic, Institute of Computer Science
Keywords
Chebyshev set; strictly convex space; boundedly compact; continuous selection; near best approximation
Abstract
Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X = Lp(Ω) (1 < p < ∞ and Ω ⊂ R^d), then for any positive constant Γ and any continuous function φ from X to M, ‖f − φ(f)‖ > ‖f − M‖ + Γ for some f in X. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the Lp-norm.
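The obstruction described in the abstract can be illustrated (this toy example is not from the paper itself) by a finite-dimensional analogue: R^2 with the Euclidean norm is L2 on a two-point measure space, and the union M of the two coordinate axes is a union of one-dimensional subspaces. Even the metric projection onto M fails to be continuous near the diagonal:

```python
def best_approx_union_of_axes(p):
    """Metric projection of p = (a, b) in R^2 onto M = x-axis ∪ y-axis.

    Each axis is a one-dimensional linear subspace, so M is a finite
    union of finite-dimensional subspaces, as in the abstract.
    """
    a, b = p
    # Distance from (a, b) to the x-axis is |b|; to the y-axis it is |a|.
    # The nearest point of M drops the smaller coordinate.
    return (a, 0.0) if abs(b) <= abs(a) else (0.0, b)

# Two nearby inputs straddling the diagonal |a| = |b| ...
p1 = (1.0, 0.99)
p2 = (0.99, 1.0)
# ... are sent to far-apart points on different axes, so the
# best-approximation map jumps: no continuous selection tracks it.
print(best_approx_union_of_axes(p1))  # (1.0, 0.0)
print(best_approx_union_of_axes(p2))  # (0.0, 1.0)
```

The paper's result is much stronger: in Lp(Ω) with 1 < p < ∞, a continuous map into M cannot even stay within a fixed additive constant Γ of the best-approximation error.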
Pages: 143–147 (4 pages)