Intractability of Learning the Discrete Logarithm with Gradient-Based Methods

Cited: 0
Authors
Takhanov, Rustem [1 ]
Tezekbayev, Maxat [1 ]
Pak, Artur [1 ]
Bolatov, Arman [2 ]
Kadyrsizova, Zhibek [1 ]
Assylbekov, Zhenisbek [3 ]
Affiliations
[1] Nazarbayev Univ, Dept Math, Astana, Kazakhstan
[2] Nazarbayev Univ, Dept Comp Sci, Astana, Kazakhstan
[3] Purdue Univ Ft Wayne, Dept Math Sci, Ft Wayne, IN USA
Keywords
Discrete Logarithm; Gradient-based Learning; Cryptographic Protocols
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The discrete logarithm problem is a fundamental challenge in number theory with significant implications for cryptographic protocols. In this paper, we investigate the limitations of gradient-based methods for learning the parity bit of the discrete logarithm in finite cyclic groups of prime order. Our main result, supported by theoretical analysis and empirical verification, reveals that the gradient of the loss function concentrates around a fixed point, independent of the base of the logarithm. This concentration property severely restricts the ability to learn the parity bit efficiently with gradient-based methods, irrespective of the complexity of the network architecture being trained. Our proof relies on the Boas-Bellman inequality in inner product spaces and involves establishing the approximate orthogonality of the discrete logarithm's parity-bit functions via the spectral norm of certain matrices. Empirical experiments using a neural network-based approach further verify these limitations, demonstrating a decreasing success rate in predicting the parity bit as the group order increases.
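As a concrete illustration of the learning task the abstract describes, the sketch below (not taken from the paper) enumerates parity-bit samples (g^x mod p, x mod 2) by brute force for a toy modulus. Note the caveat: the multiplicative group mod 11 has order 10, which is not prime, whereas the paper's results concern cyclic groups of prime order; the snippet only shows the shape of the supervised-learning target.

```python
def parity_bit_dataset(p, g):
    """Enumerate (g^x mod p, x & 1) for x = 0..p-2 by iterated multiplication.

    Each pair is (group element, parity bit of its discrete logarithm
    to base g). Brute force is fine only for toy-sized p.
    """
    data, h = [], 1
    for x in range(p - 1):
        data.append((h, x & 1))  # label = parity of the exponent x
        h = (h * g) % p          # advance to g^(x+1) mod p
    return data

# Toy example: 2 generates the multiplicative group mod 11.
pairs = parity_bit_dataset(11, 2)
```

A gradient-trained classifier would receive the group element (first component) as input and the parity bit (second component) as label; the paper's concentration result explains why such training stalls as the group order grows.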
Pages: 16
Related Papers
50 total
  • [1] Adaptive Gradient-Based Meta-Learning Methods
    Khodak, Mikhail
    Balcan, Maria-Florina
    Talwalkar, Ameet
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [2] Discrete optimization via gradient-based adaptive stochastic search methods
    Chen, Xi
    Zhou, Enlu
    Hu, Jiaqiao
    [J]. IISE TRANSACTIONS, 2018, 50 (09) : 789 - 805
  • [3] Gradient-Based Learning of Discrete Structured Measurement Operators for Signal Recovery
    Sauder, Jonathan
    Genzel, Martin
    Jung, Peter
    [J]. IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2022, 3 (03): : 481 - 492
  • [4] Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods
    Bogolubsky, Lev
    Gusev, Gleb
    Raigorodskii, Andrei
    Tikhonov, Aleksey
    Zhukovskii, Maksim
    Dvurechensky, Pavel
    Gasnikov, Alexander
    Nesterov, Yurii
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [5] Gradient-based learning and optimization
    Cao, XR
    [J]. PROCEEDINGS OF THE 17TH INTERNATIONAL SYMPOSIUM ON COMPUTER AND INFORMATION SCIENCES, 2003, : 3 - 7
  • [6] Robust monotone gradient-based discrete-time iterative learning control
    Owens, D. H.
    Hatonen, J. J.
    Daley, S.
    [J]. INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2009, 19 (06) : 634 - 661
  • [7] Gradient-Based Discrete-Time Concurrent Learning for Standalone Function Approximation
    Djaneye-Boundjou, Ouboti
    Ordonez, Raul
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2020, 65 (02) : 749 - 756
  • [8] Gradient-based defense methods for data leakage in vertical federated learning
    Chang, Wenhan
    Zhu, Tianqing
    [J]. COMPUTERS & SECURITY, 2024, 139
  • [9] A gradient-based approach for discrete optimum design
    Li, Yanyan
    Tan, Tao
    Li, Xingsi
    [J]. STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, 2010, 41 (06) : 881 - 892