Euclidean Contractivity of Neural Networks With Symmetric Weights

Cited: 5
Authors
Centorrino, Veronica [1]
Gokhale, Anand [2]
Davydov, Alexander [2]
Russo, Giovanni [3]
Bullo, Francesco [2]
Affiliations
[1] Univ Naples Federico II, Scuola Super Meridionale, I-80138 Naples, Italy
[2] Univ Calif Santa Barbara, Ctr Control Dynam Syst & Computat, Santa Barbara, CA 93106 USA
[3] Univ Salerno, Dept Informat & Elect Engn & Appl Math, I-84084 Salerno, Italy
Source
IEEE Control Systems Letters
Keywords
Symmetric matrices; Fuzzy control; Asymptotic stability; Stability criteria; Recurrent neural networks; Numerical stability; Optimization; Neural networks; contraction theory; stability of nonlinear systems
DOI
10.1109/LCSYS.2023.3278250
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This letter investigates stability conditions of continuous-time Hopfield and firing-rate neural networks by leveraging contraction theory. First, we present a number of useful general algebraic results on matrix polytopes and products of symmetric matrices. Then, we give sufficient conditions for strong and weak Euclidean contractivity, i.e., contractivity with respect to the $\ell_2$ norm, of both models with symmetric weights and (possibly) non-smooth activation functions. Our contraction analysis leads to contraction rates that are log-optimal for almost all symmetric synaptic matrices. Finally, we use our results to propose a firing-rate neural network model to solve a quadratic optimization problem with box constraints.
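Background note: a system $\dot{x} = f(x)$ is strongly Euclidean contracting with rate $c > 0$ when the $\ell_2$ log-norm of its Jacobian satisfies $\mu_2(Df(x)) = \lambda_{\max}\big((Df(x) + Df(x)^\top)/2\big) \le -c$ for all $x$, which guarantees exponential convergence of all trajectories to a unique equilibrium. To make the final application concrete, the sketch below is a minimal Python illustration, not the authors' exact construction: a firing-rate-style network whose box-projection nonlinearity plays the role of a non-smooth activation and whose effective symmetric weight matrix is $I - \alpha Q$, so that its equilibrium solves a box-constrained quadratic program. The function name `firing_rate_qp` and the step-size rule are illustrative assumptions.

```python
import numpy as np

def firing_rate_qp(Q, c, lo, hi, alpha=None, dt=0.01, steps=20000):
    """Integrate x' = -x + clip(x - alpha*(Q x + c), lo, hi).

    For symmetric positive definite Q and alpha < 2 / lambda_max(Q),
    the inner map is a contraction, so the ODE converges to the
    minimizer of 0.5 x^T Q x + c^T x over the box [lo, hi].
    """
    n = Q.shape[0]
    if alpha is None:
        alpha = 1.0 / np.linalg.eigvalsh(Q).max()  # conservative gain choice
    x = np.zeros(n)
    for _ in range(steps):
        # Box projection acts as the (non-smooth) activation function.
        target = np.clip(x - alpha * (Q @ x + c), lo, hi)
        x = x + dt * (-x + target)  # forward-Euler step of the firing-rate ODE
    return x

# Usage: a random 4-dimensional QP with unit-box constraints.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A @ A.T + np.eye(4)            # symmetric positive definite weights
c = rng.standard_normal(4)
x_star = firing_rate_qp(Q, c, lo=-np.ones(4), hi=np.ones(4))
print("approximate minimizer:", x_star)
```

At a fixed point, $x^* = \Pi_{[lo,hi]}\big(x^* - \alpha(Qx^* + c)\big)$, which is the standard projected-gradient optimality condition for the box-constrained QP; this is why the network's equilibrium coincides with the minimizer.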
Pages: 1724 - 1729
Page count: 6
Related Papers
50 records in total
  • [21] Neural networks in non-Euclidean spaces
    Duch, W
    Adamczak, R
    Diercksen, GHF
    NEURAL PROCESSING LETTERS, 1999, 10 (03) : 201 - 210
  • [22] Complete Neural Networks for Complete Euclidean Graphs
    Hordan, Snir
    Amir, Tal
    Gortler, Steven J.
    Dym, Nadav
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2024, 38 (11) : 12482 - 12490
  • [24] Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
    Hubara, Itay
    Courbariaux, Matthieu
    Soudry, Daniel
    El-Yaniv, Ran
    Bengio, Yoshua
    JOURNAL OF MACHINE LEARNING RESEARCH, 2018, 18
  • [25] Routes to chaos in neural networks with random weights
    Albers, DJ
    Sprott, JC
    Dechert, WD
    INTERNATIONAL JOURNAL OF BIFURCATION AND CHAOS, 1998, 8 (07) : 1463 - 1478
  • [26] Learning Neural Networks without Lazy Weights
    Lee, Dong-gi
    Cho, Junhee
    Kim, Myungjun
    Park, Sunghong
    Shin, Hyunjung
    2022 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (IEEE BIGCOMP 2022), 2022 : 82 - 87
  • [27] Determination of weights for relaxation recurrent neural networks
    Serpen, G
    Livingston, DL
    NEUROCOMPUTING, 2000, 34 : 145 - 168
  • [28] Neural Networks Between Integer and Rational Weights
    Sima, Jiri
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017 : 154 - 161
  • [29] Hardness of Learning Neural Networks with Natural Weights
    Daniely, Amit
    Vardi, Gal
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [30] Neural Networks with Unipolar Weights and Normalized Thresholds
    Brodka, JS
    Macukow, B
    OPTICAL COMPUTING, 1995, 139 : 463 - 466