A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks

Cited by: 26
Authors
Calafiore, Giuseppe C. [1 ,2 ]
Gaubert, Stephane [3 ,4 ]
Possieri, Corrado [5 ]
Affiliations
[1] Politecn Torino, Dipartimento Elettron & Telecomunicaz, I-10129 Turin, Italy
[2] CNR, Ist Elettron & Ingn Informaz & Telecomunicaz, I-10129 Turin, Italy
[3] INRIA, F-91120 Palaiseau, France
[4] Ecole Polytech, UMR CNRS 7641, Ctr Math Appl, F-91128 Palaiseau, France
[5] CNR, Ist Anal Sistemi & Informat A Ruberti, I-00185 Rome, Italy
Keywords
Optimization; Numerical models; Feedforward neural networks; Transforms; Data models; Convex functions; Data-driven optimization; difference of convex (DC) programming; feedforward neural networks (FFNs); log-sum-exp (LSE) networks; subtraction-free expressions; surrogate models; universal approximation; DC; OPTIMIZATION; GLUCOSE
DOI
10.1109/TNNLS.2020.2975051
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node, referred to as log-sum-exp (LSE) network, is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of network maps to a family of subtraction-free ratios of generalized posynomials (GPOS), which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for a design that possesses a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing an effective optimization-based design. We illustrate the proposed approach by applying it to the data-driven design of a diet for a patient with type-2 diabetes and to a nonconvex optimization problem.
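The abstract describes the architecture concretely: each LSE term is a feedforward network with an affine map into exponential hidden units and a logarithmic output node, and the model output is the difference of two such terms. A minimal sketch of this structure (the function and variable names here are illustrative, not from the paper, and the max-shift trick is a standard numerical-stability device rather than part of the paper's definition) could look like:

```python
import numpy as np

def lse(x, A, b):
    """One LSE network term: log(sum_k exp(a_k . x + b_k)).

    Exponential activations in the hidden layer, a single logarithmic
    output node. Computed stably by factoring out the largest exponent.
    """
    z = A @ x + b              # affine pre-activation, one entry per hidden unit
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def diff_lse(x, A1, b1, A2, b2):
    """Difference-of-LSE network: each lse(.) term is convex in x,
    so the output has the difference-of-convex (DC) form the paper exploits."""
    return lse(x, A1, b1) - lse(x, A2, b2)

# Example: with rows a = 1 and a = -1 and zero biases, the first term is
# log(exp(x) + exp(-x)); subtracting a trivial one-unit term gives a DC model.
A1, b1 = np.array([[1.0], [-1.0]]), np.zeros(2)
A2, b2 = np.zeros((1, 1)), np.zeros(1)
y = diff_lse(np.array([0.0]), A1, b1, A2, b2)   # log(2) at x = 0
```

Because both terms are convex, a trained model of this kind can be handed directly to difference-of-convex programming methods, which is the optimization advantage the abstract emphasizes.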
Pages: 5603-5612
Number of pages: 10
Related Papers
50 records in total
  • [1] Log-Sum-Exp Neural Networks and Posynomial Models for Convex and Log-Log-Convex Data
    Calafiore, Giuseppe C.
    Gaubert, Stephane
    Possieri, Corrado
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (03) : 827 - 838
  • [2] On the Use of Difference of Log-Sum-Exp Neural Networks to Solve Data-Driven Model Predictive Control Tracking Problems
    Brueggemann, Sven
    Possieri, Corrado
    [J]. IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (04): : 1267 - 1272
  • [3] Efficient model-free Q-factor approximation in value space via log-sum-exp neural networks
    Calafiore, Giuseppe C.
    Possieri, Corrado
    [J]. 2020 EUROPEAN CONTROL CONFERENCE (ECC 2020), 2020, : 23 - 28
  • [4] On the Use of Difference of Log-Sum-Exp Neural Networks to Solve Data-Driven Model Predictive Control Tracking Problems
    Brueggemann, Sven
    Possieri, Corrado
    [J]. 2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 448 - 453
  • [5] Accurately computing the log-sum-exp and softmax functions
    Blanchard, Pierre
    Higham, Desmond J.
    Higham, Nicholas J.
    [J]. IMA JOURNAL OF NUMERICAL ANALYSIS, 2021, 41 (04) : 2311 - 2330
  • [6] VTOL Aircraft Optimal Gain Prediction via Parameterized Log-Sum-Exp Networks
    Kim, Jinrae
    Lee, Hanna
    Ko, Donghyeon
    Kim, Youdan
    [J]. 2023 EUROPEAN CONTROL CONFERENCE (ECC), 2023
  • [7] An equivalence between log-sum-exp approximation and entropy regularization in K-means clustering
    Inoue, Kohei
    Hara, Kenji
    [J]. IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2020, 11 (04): : 446 - 453
  • [8] Log-sum-exp Optimization based on Continuous Piecewise Linearization Techniques
    Xi, Xiangming
    Xu, Jun
    Lou, Yunjiang
    [J]. 2020 IEEE 16TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION (ICCA), 2020, : 600 - 605
  • [9] Global solutions to nonconvex optimization of 4th-order polynomial and log-sum-exp functions
    Chen, Yi
    Gao, David Y.
    [J]. JOURNAL OF GLOBAL OPTIMIZATION, 2016, 64 (03) : 417 - 431
  • [10] The Power of Log-Sum-Exp: Sequential Density Ratio Matrix Estimation for Speed-Accuracy Optimization
    Miyagawa, Taiki
    Ebihara, Akinori F.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139