Two-layer tree-connected feed-forward neural network model for neural cryptography

Cited by: 12
Authors
Lei, Xinyu [1 ]
Liao, Xiaofeng [1 ]
Chen, Fei [2 ]
Huang, Tingwen [3 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Hong Kong, Peoples R China
[3] Texas A&M Univ, Doha, Qatar
Source
PHYSICAL REVIEW E | 2013, Vol. 87, Issue 03
Funding
National Natural Science Foundation of China;
Keywords
SYNCHRONIZATION; INFORMATION;
DOI
10.1103/PhysRevE.87.032811
CLC number
O35 [Fluid mechanics]; O53 [Plasma physics];
Subject classification codes
070204; 080103; 080704;
Abstract
Neural synchronization by means of mutual learning provides an avenue to design public key exchange protocols, bringing about what is known as neural cryptography. Two identically structured neural networks learn from each other and eventually reach full synchronization. Full synchronization leaves the two networks with identical weights, which can serve as a secret key for many subsequent cryptographic purposes. It is striking that, after the first decade of neural cryptography, the tree parity machine (TPM) network with K = 3 hidden units appears to be the only network suitable for a neural protocol; despite considerable research effort, no convincingly secure neural protocol has been designed using any other network structure. With the goal of overcoming this limitation, in this paper we develop a two-layer tree-connected feed-forward neural network (TTFNN) model for a neural protocol. The TTFNN model captures the notion that two partners are able to exchange a vector with multiple bits in each time step. An in-depth study of the dynamics of TTFNN-based protocols is then undertaken, from which a feasibility condition is theoretically derived for identifying applicable protocols. Afterward, guided by two analytically derived heuristic rules, a complete methodology for designing feasible TTFNN-based protocols is elaborated. A variety of feasible neural protocols are constructed, demonstrating the effectiveness and benefits of the proposed model. From the perspective of application, TTFNN-based instances that outperform the conventional TPM-based protocol in synchronization speed are also experimentally confirmed.
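The abstract's starting point, mutual learning between two tree parity machines, can be sketched in a few lines. This is a minimal sketch of the standard TPM synchronization loop with the Hebbian update rule from the earlier literature; the parameter values K, N, L are illustrative, and this is not the TTFNN construction proposed in the paper itself.

```python
import numpy as np

# Illustrative TPM parameters: K hidden units, N inputs per unit,
# integer weights bounded in [-L, L].
K, N, L = 3, 10, 3
rng = np.random.default_rng(0)

def make_weights():
    return rng.integers(-L, L + 1, size=(K, N))

def tpm_output(w, x):
    # Hidden-unit outputs are the signs of the local fields;
    # the network output tau is their product.
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1          # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # Only hidden units that agree with the total output learn;
    # weights are clipped back into [-L, L].
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

wA, wB = make_weights(), make_weights()
steps = 0
while not np.array_equal(wA, wB) and steps < 100_000:
    x = rng.choice([-1, 1], size=(K, N))     # shared random input
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:                             # learn only on agreement
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1

print(np.array_equal(wA, wB))                # full synchronization reached
```

Once the weight matrices coincide, both parties hold the same secret material; the paper's TTFNN model generalizes this single-bit-per-step exchange to a multi-bit vector per time step.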
Pages: 12
Related papers
50 records total
  • [21] Active Learning Based on Single-Hidden Layer Feed-forward Neural Network
    Wang, Ran
    Kwong, Sam
    Jiang, Qingshan
    Wong, Ka-Chun
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, 2015, : 2158 - 2163
  • [22] Feed-forward multilayer neural network model for vehicle lateral guidance control
    Wang, GW
    Fujiwara, N
    Bao, Y
    [J]. ADVANCED ROBOTICS, 1999, 12 (7-8) : 735 - 753
  • [23] Spike-timing computation properties of a feed-forward neural network model
    Sinha, Drew B.
    Ledbetter, Noah M.
    Barbour, Dennis L.
    [J]. FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2014, 8
  • [24] Comparative investigation on dimension reduction and regression in three layer feed-forward neural network
    Shi, Lei
    Xu, Lei
    [J]. ARTIFICIAL NEURAL NETWORKS - ICANN 2006, PT 1, 2006, 4131 : 51 - 60
  • [25] Inverse Halftoning using Multi-Layer Feed-Forward Neural Network (P)
    Hamashoji, Hiroki
    Tanaka, Ken-ichi
    [J]. PROCEEDINGS OF INTERNATIONAL CONFERENCE ON ARTIFICIAL LIFE AND ROBOTICS (ICAROB 2014), 2014, : 138 - 142
  • [26] Number determination of hidden-layer nodes for Hermite feed-forward neural network
    Zhang Y.-N.
    Xiao X.-C.
    Chen Y.-W.
    Zou A.-J.
    [J]. Zhejiang Daxue Xuebao(Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2010, 44 (02): : 271 - 275
  • [27] Estimating Model Complexity of Feed-Forward Neural Networks
    Landsittel, Douglas
    [J]. JOURNAL OF MODERN APPLIED STATISTICAL METHODS, 2009, 8 (02) : 488 - 504
  • [28] Introduction to multi-layer feed-forward neural networks
    Svozil, D
    Kvasnicka, V
    Pospichal, J
    [J]. CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS, 1997, 39 (01) : 43 - 62
  • [29] Analog feed-forward neural network with on-chip learning
    Univ of Oslo, Oslo, Norway
    [J]. ANALOG INTEGRATED CIRCUITS AND SIGNAL PROCESSING, (01): 65 - 75
  • [30] NORMALIZED DATA BARRIER AMPLIFIER FOR FEED-FORWARD NEURAL NETWORK
    Fuangkhon, P.
    [J]. NEURAL NETWORK WORLD, 2021, 31 (02) : 125 - 157