Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm

Cited: 56
Authors
Yi, Z [1 ]
Ye, M
Lv, JC
Tan, KK
Institutions
[1] Univ Elect Sci & Technol China, Sch Engn & Comp Sci, Computat Intelligence Lab, Chengdu 610054, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117576, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2005, Vol. 16, No. 6
Funding
National Natural Science Foundation of China; Specialized Research Fund for the Doctoral Program of Higher Education;
Keywords
eigenvalue; eigenvector; neural network; Oja's learning algorithm; principal component analysis (PCA);
DOI
10.1109/TNN.2005.852236
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The convergence of Oja's principal component analysis (PCA) learning algorithms is difficult to study and analyze directly. Traditionally, their convergence is analyzed indirectly via certain deterministic continuous time (DCT) systems. This method requires the learning rate to converge to zero, which is an unreasonable requirement in many practical applications. Recently, deterministic discrete time (DDT) systems have been proposed instead to indirectly interpret the dynamics of the learning algorithms. Unlike DCT systems, DDT systems allow the learning rate to be a nonzero constant. This paper provides important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are as follows. 1) A number of invariant sets are obtained, and it is shown that any trajectory starting from a point in an invariant set remains in that set forever; nondivergence of the trajectories is thus guaranteed. 2) The convergence of the DDT system is analyzed rigorously: it is proven that almost all trajectories starting from points in an invariant set converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, the exponential convergence rate is obtained, providing useful guidelines for selecting a learning rate that yields fast convergence. 3) Since trajectories may diverge, the careful choice of initial vectors is an important issue; the paper suggests drawing initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
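As a concrete illustration, the DDT system discussed here takes the standard form w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k)), where C is the input correlation matrix and eta a constant, nonzero learning rate. The following is a minimal Python sketch under these assumptions; the matrix C, the learning rate eta, and the initial vector are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the DDT system of Oja's PCA rule (constant learning rate):
#   w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k))
# C, eta, and the initial vector below are illustrative, not from the paper.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 200))
C = A @ A.T / 200.0             # sample correlation matrix (symmetric PSD)

eta = 0.05                      # constant, nonzero learning rate
w = rng.standard_normal(5)
w /= np.linalg.norm(w)          # initial vector on the unit hypersphere

for k in range(2000):
    Cw = C @ w
    w = w + eta * (Cw - (w @ Cw) * w)   # DDT iteration (no renormalization)

# Compare against the unit eigenvector of the largest eigenvalue of C.
vals, vecs = np.linalg.eigh(C)
v1 = vecs[:, -1]
print("alignment |<w/||w||, v1>| =", abs(w / np.linalg.norm(w) @ v1))
```

For a small enough eta relative to the largest eigenvalue of C, the iterate aligns with the principal eigenvector and its norm approaches one, matching the exponential-convergence behavior described in the abstract.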
Pages: 1318-1328
Page count: 11
Related papers (showing 10 of 50)
  • [1] Convergence analysis of a deterministic discrete time system of Feng's MCA learning algorithm
    Peng, Dezhong
    Yi, Zhang
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2006, 54 (09) : 3626 - 3632
  • [2] Convergence analysis of Xu's LMSER learning algorithm via deterministic discrete time system method
    Lv, Jian Cheng
    Yi, Zhang
    Tan, K. K.
    NEUROCOMPUTING, 2006, 70 (1-3) : 362 - 372
  • [3] Convergence analysis of deterministic discrete time system of a unified self-stabilizing algorithm for PCA and MCA
    Kong, Xiangyu
    An, Qiusheng
    Ma, Hongguang
    Han, Chongzhao
    Zhang, Qi
    NEURAL NETWORKS, 2012, 36 : 64 - 72
  • [4] Convergence analysis of the OJAn MCA learning algorithm by the deterministic discrete time method
    Peng, Dezhong
    Yi, Zhang
    THEORETICAL COMPUTER SCIENCE, 2007, 378 (01) : 87 - 100
  • [5] Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate
    Lv, Jian Cheng
    Yi, Zhang
    Tan, K. K.
    THEORETICAL COMPUTER SCIENCE, 2006, 367 (03) : 286 - 307
  • [6] Convergence analysis for Oja+ MCA learning algorithm
    Lv, JC
    Ye, M
    Yi, Z
    ADVANCES IN NEURAL NETWORKS - ISNN 2004, PT 1, 2004, 3173 : 810 - 814
  • [7] Oja's Algorithm for Streaming Sparse PCA
    Department of Computer Science, University of Texas, Austin, United States
    arXiv
  • [8] On the optimality of the Oja's algorithm for online PCA
    Liang, Xin
    STATISTICS AND COMPUTING, 2023, 33 (03)
  • [9] Convergence analysis of Chauvin's PCA learning algorithm with a constant learning rate
    Lv, Jian Cheng
    Yi, Zhang
    CHAOS SOLITONS & FRACTALS, 2007, 32 (04) : 1562 - 1571