Analysis of Information Set Decoding for a Sub-linear Error Weight

Cited by: 49
Authors
Torres, Rodolfo Canto [1 ,2 ]
Sendrier, Nicolas [1 ]
Affiliations
[1] Inria, Rocquencourt, France
[2] Inria, 2 Rue Simone IFF, Paris, France
Source
Keywords
APPROXIMATION COMPLEXITY; CODE
DOI
10.1007/978-3-319-29360-8_10
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
The security of code-based cryptography is strongly related to the hardness of generic decoding of linear codes. The best known generic decoding algorithms all derive from the Information Set Decoding (ISD) algorithm proposed by Prange in 1962. The ISD algorithm was later improved by Stern in 1989 (and Dumer in 1991). In the last few years, several significant improvements have appeared: first by May, Meurer, and Thomae at Asiacrypt 2011, then by Becker, Joux, May, and Meurer at Eurocrypt 2012, and finally by May and Ozerov at Eurocrypt 2015. With those methods, correcting w errors in a binary linear code of length n and dimension k has a cost 2^{cw(1+o(1))} as the length n grows, where c is a constant depending on the code rate k/n and on the error rate w/n. Each of the above ISD variants improved that constant c when it appeared. When the number of errors w is sub-linear, w = o(n), the cost of all ISD variants still has the form 2^{cw(1+o(1))}. We prove here that the constant c depends only on the code rate k/n and is the same for all the known ISD variants mentioned above, including the fifty-year-old Prange algorithm. The most promising variants of the McEliece encryption scheme use either Goppa codes, with w = O(n/log(n)), or MDPC codes, with w = O(√n). Our result means that, in those cases, when we scale up the system parameters, the improvement brought by the latest ISD variants becomes less and less significant. This fact has already been observed; we give a formal proof of it here. Moreover, our proof seems to indicate that any foreseeable variant of ISD should have the same asymptotic behavior.
Pages: 144-161
Page count: 18
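The abstract states that, for sub-linear error weight w = o(n), the exponent constant c depends only on the code rate k/n; that constant is commonly quoted as c = log2(1/(1 - k/n)), the exponent of the original Prange algorithm. As a rough illustration (not taken from the paper), the Python sketch below compares the exact Prange iteration-count exponent log2(C(n,w)/C(n-k,w)) with the asymptotic value c*w on two example parameter sets, one in the Goppa-code regime (w = O(n/log n)) and one in the MDPC regime (w = O(√n)); the numbers are chosen for illustration only.

# Illustration only: compare the exact Prange work-factor exponent with the
# sub-linear-weight asymptotic c*w, c = log2(1/(1 - k/n)).
# Parameter sets are example values in the Goppa and MDPC regimes, not
# parameters prescribed by the paper.

from math import comb, log2

def prange_exponent(n: int, k: int, w: int) -> float:
    """log2 of the expected number of Prange iterations: one iteration
    succeeds when all w errors miss the k information positions, which
    happens with probability C(n-k, w) / C(n, w)."""
    return log2(comb(n, w)) - log2(comb(n - k, w))

def asymptotic_exponent(n: int, k: int, w: int) -> float:
    """c * w with c = -log2(1 - k/n), the rate-only exponent that all ISD
    variants share when w = o(n)."""
    return -w * log2(1.0 - k / n)

for (n, k, w) in [(3488, 2720, 64),      # example in the Goppa-code regime
                  (32768, 16384, 180)]:  # example in the MDPC regime
    exact = prange_exponent(n, k, w)
    approx = asymptotic_exponent(n, k, w)
    print(f"n={n:6d} k={k:5d} w={w:3d}  exact Prange ~2^{exact:7.1f}  "
          f"asymptotic ~2^{approx:7.1f}")

On such inputs the two exponents differ only by lower-order terms; that gap corresponds to the (1+o(1)) factor in the cost estimate 2^{cw(1+o(1))} quoted in the abstract.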