Molecular Sparse Representation by a 3D Ellipsoid Radial Basis Function Neural Network via L1 Regularization

Cited: 3
Authors
Gui, Sheng [1 ,2 ,3 ]
Chen, Zhaodi [3 ]
Lu, Benzhuo [1 ,2 ]
Chen, Minxin [3 ]
Affiliations
[1] Chinese Acad Sci, Acad Math & Syst Sci, Natl Ctr Math & Interdisciplinary Sci, State Key Lab Sci & Engn Comp, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Soochow Univ, Dept Math, Suzhou 215006, Peoples R China
Keywords
GAUSSIAN SURFACE; EQUIVALENCE; GENERATION; EFFICIENT; ALGORITHM; MODELS
DOI
10.1021/acs.jcim.0c00585
CLC Classification: R914 [Medicinal Chemistry]
Subject Classification: 100701
Abstract
The three-dimensional structures and shapes of biomolecules provide essential information about their interactions and functions. Unfortunately, the computational cost of biomolecular shape representation remains an open challenge, as it grows rapidly with the number of atoms. Recent developments in sparse representation and deep learning have shown significant improvements in both time and storage. A sparse representation of molecular shape is also useful in various other applications, such as molecular structure alignment, docking, and coarse-grained molecular modeling. We have developed an ellipsoid radial basis function neural network (ERBFNN) and an algorithm for sparsely representing molecular shape. To obtain a sparse representation model of molecular shape, the Gaussian density map of the molecule is approximated by an ERBFNN with a relatively small number of neurons. The network is trained by optimizing a nonlinear loss function with L1 regularization. Experimental results reveal that our algorithm can represent the original molecular shape with relatively high accuracy using a smaller-scale ERBFNN. Our network is in principle applicable to multiresolution sparse representation of molecular shape and to coarse-grained molecular modeling. Executable files are available at https://github.com/SGUI-LSEC/SparseGaussianMolecule. The program was implemented in PyTorch and was run on Linux.
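The abstract describes fitting a molecule's Gaussian density map with a small number of ellipsoid Gaussian neurons whose amplitudes are driven toward zero by an L1 penalty. The following is a minimal PyTorch sketch of that idea, not the authors' released code (see the repository linked above for the reference implementation); the names ERBFNN and fit_density, the parameterization A_k = L_k^T L_k, and all hyperparameter values are illustrative assumptions.

import torch

class ERBFNN(torch.nn.Module):
    # Sum of K ellipsoid Gaussian RBFs:
    #   phi_k(x) = w_k * exp(-(x - c_k)^T A_k (x - c_k)),
    # where A_k = L_k^T L_k is kept positive semidefinite via the factor L_k.
    def __init__(self, k):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(k, 3))             # ellipsoid centers c_k
        self.factors = torch.nn.Parameter(torch.eye(3).repeat(k, 1, 1))  # factors L_k of A_k
        self.weights = torch.nn.Parameter(torch.ones(k))                 # amplitudes w_k (L1-penalized)

    def forward(self, x):
        # x: (N, 3) query points; returns the (N,) predicted density.
        d = x[:, None, :] - self.centers[None, :, :]        # (N, K, 3) displacements x - c_k
        Ld = torch.einsum('kij,nkj->nki', self.factors, d)  # L_k (x - c_k)
        q = (Ld ** 2).sum(-1)                               # quadratic forms (x - c_k)^T A_k (x - c_k)
        return (self.weights * torch.exp(-q)).sum(-1)

def fit_density(model, x, g, lam=1e-3, steps=2000, lr=1e-2):
    # Minimize the L1-regularized data-fit loss
    #   mean_n (model(x_n) - g_n)^2 + lam * sum_k |w_k|,
    # where g holds the target Gaussian density map sampled at the points x.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(x) - g) ** 2).mean() + lam * model.weights.abs().sum()
        loss.backward()
        opt.step()
    return model

After training, neurons whose amplitude |w_k| falls below a small threshold can be pruned; this pruning is what yields the sparse, small-scale representation the abstract refers to.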
Pages: 6054-6064
Page count: 11
Related Papers
50 records in total
  • [1] An Improved Radial Basis Function Neuron Network Based on the l1 Regularization
    Kang, Yunling; Liu, Manxi; You, Guoqiao; Liu, Guidong
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL METHODS, 2023, 20 (10)
  • [2] A Simple Neural Network for Sparse Optimization With l1 Regularization
    Ma, Litao; Bian, Wei
    [J]. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (04): 3430-3442
  • [3] Efficient construction of sparse radial basis function neural networks using L1-regularization
    Qian, Xusheng; Huang, He; Chen, Xiaoping; Huang, Tingwen
    [J]. NEURAL NETWORKS, 2017, 94: 239-254
  • [4] Sparse Hopfield network reconstruction with l1 regularization
    Huang, Haiping
    [J]. EUROPEAN PHYSICAL JOURNAL B, 2013, 86 (11)
  • [5] 3D crosswell electromagnetic inversion based on radial basis function neural network
    Fang, Sinan; Zhang, Zhansong; Chen, Wei; Pan, Heping; Peng, Jun
    [J]. ACTA GEOPHYSICA, 2020, 68 (03): 711-721
  • [6] Resolution Enhancement for LASAR 3D Imaging via l1 Regularization and SVA
    Xiang, Gao; Zhang, Xiaoling; Shi, Jun; Wei, Shunjun
    [J]. PROGRESS IN ELECTROMAGNETICS RESEARCH M, 2015, 41: 95-104
  • [7] A recursive least squares algorithm with l1 regularization for sparse representation
    Liu, Di; Baldi, Simone; Liu, Quan; Yu, Wenwu
    [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2023, 66 (02)
  • [8] SPARSE REPRESENTATION LEARNING OF DATA BY AUTOENCODERS WITH L1/2 REGULARIZATION
    Li, F.; Zurada, J. M.; Wu, W.
    [J]. NEURAL NETWORK WORLD, 2018, 28 (02): 133-147
  • [9] Transformed l1 regularization for learning sparse deep neural networks
    Ma, Rongrong; Miao, Jianyu; Niu, Lingfeng; Zhang, Peng
    [J]. NEURAL NETWORKS, 2019, 119: 286-298