Toward Deep Adaptive Hinging Hyperplanes

Cited: 0
Authors
Tao, Qinghua [1 ,2 ]
Xu, Jun [3 ]
Li, Zhen [3 ]
Xie, Na [4 ]
Wang, Shuning [2 ]
Li, Xiaoli [5 ]
Suykens, Johan A. K. [1 ]
Affiliations
[1] Katholieke Univ Leuven, STADIUS, ESAT, B-3001 Leuven, Belgium
[2] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
[3] Harbin Inst Technol, Sch Mech Engn & Automat, Shenzhen 518055, Peoples R China
[4] Cent Univ Finance & Econ, Sch Management Sci & Engn, Beijing 100081, Peoples R China
[5] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
Funding
National Natural Science Foundation of China; European Research Council;
Keywords
Neurons; Artificial neural networks; Network topology; Training; Topology; Optimization; Adaptive systems; Adaptive hinging hyperplanes (AHHs); analysis of variance (ANOVA) decomposition; domain partition; piecewise linear (PWL); skip-layer connection; REGRESSION; SELECTION;
DOI
10.1109/TNNLS.2021.3079113
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The adaptive hinging hyperplane (AHH) model is a popular piecewise linear representation with a generalized tree structure and has been successfully applied in dynamic system identification. In this article, we construct the deep AHH (DAHH) model to extend and generalize the network structure of the AHH model to high-dimensional problems. The network structure of DAHH is determined through a forward-growth procedure, in which an activity ratio is introduced to select effective neurons and no connecting weights are involved between the layers. All neurons in the DAHH network can then be flexibly connected to the output in a skip-layer format, and only the corresponding output weights are the parameters to optimize. With such a network framework, the backpropagation algorithm can be implemented in DAHH to efficiently tackle large-scale problems, and the vanishing-gradient problem is not encountered in training. In fact, the optimization problem of DAHH remains convex for any convex loss in the output layer, which brings natural advantages in optimization. Unlike existing neural networks, DAHH is easier to interpret: its neurons are sparsely connected and an analysis of variance (ANOVA) decomposition can be applied, which helps reveal the interactions between variables. A theoretical analysis of the universal approximation ability and explicit domain partitions is also derived. Numerical experiments verify the effectiveness of the proposed DAHH.
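The key structural ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the hinge form `max(0, ±(x_j − β))`, the min-composition into AHH-style neurons, and the hand-chosen neuron dictionary below are illustrative assumptions (in the paper the dictionary is grown adaptively via the activity ratio). What the sketch does show faithfully is the convexity property: with all neurons fixed and connected to the output skip-layer style, fitting only the output weights is an ordinary least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge(x, j, beta, s):
    """Source basis function: max(0, s * (x_j - beta)), s = +/-1."""
    return np.maximum(0.0, s * (x[:, j] - beta))

def ahh_neuron(x, hinges):
    """AHH-style neuron: min over a set of hinge basis functions."""
    return np.min([hinge(x, j, b, s) for (j, b, s) in hinges], axis=0)

# Toy piecewise linear target: y = |x0| + noise.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.abs(X[:, 0]) + 0.01 * rng.standard_normal(200)

# Hand-chosen neuron dictionary (illustrative only; the paper
# grows these neurons adaptively using the activity ratio).
neurons = [
    [(0, 0.0, +1.0)],                     # max(0, x0)
    [(0, 0.0, -1.0)],                     # max(0, -x0)
    [(0, -0.5, +1.0), (1, -0.5, +1.0)],   # min of two hinges
]
Phi = np.column_stack(
    [np.ones(len(X))] + [ahh_neuron(X, h) for h in neurons]
)

# Convex step: least squares over the output weights only.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print("train RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```

Because `max(0, x0)` and `max(0, -x0)` together span `|x0|`, the least-squares fit recovers the target up to the noise level, without any nonconvex inner-weight training.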
Pages: 6373-6387 (15 pages)