Goal-directed molecule generation with fine-tuning by policy gradient

Cited by: 2
Authors
Sha, Chunli [1 ]
Zhu, Fei [1 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Drug design; Graph neural network; Reinforcement learning; Policy gradient; Molecule generation;
DOI
10.1016/j.eswa.2023.123127
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Graph-structured drug molecule representations often struggle to generate molecules according to particular intentions, which results in generated molecules that lack pharmacological properties. To address this problem, we propose a de novo molecular generation method that uses the policy gradient algorithm of reinforcement learning to fine-tune a molecular graph generation model. Training is divided into a pre-training stage and a fine-tuning stage. In the pre-training stage, graph neural networks and multilayer perceptrons are used to train a molecular graph generation model. In the fine-tuning stage, scoring functions are devised for multiple goal-directed generation tasks, and the policy loss function is then formulated based on a reward-shaping mechanism. A value network estimates the value of taking an action in the current graph state during agent sampling, guiding policy updates. To mitigate the decline of molecular uniqueness during learning, we dynamically adjust the weights of the two learning processes in the policy loss function, aiming to generate desirable molecules with high probability while limiting the loss of uniqueness. Experiments show that, after fine-tuning, the generative model generates molecules with the desired properties with higher probability than other models. Moreover, compared with alternative fine-tuning methods, our method effectively mitigates the decline in molecular uniqueness during learning.
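The abstract describes the fine-tuning objective only in prose. As a rough illustration, the following PyTorch-style sketch shows one plausible way such a loss could be assembled: a REINFORCE term with a learned value-network baseline, combined with a likelihood term under the frozen pre-trained model whose weight alpha is adjusted dynamically to limit the uniqueness decline. All names here (ValueNet, fine_tune_loss, prior_log_probs, alpha) are hypothetical illustrations, not taken from the paper; the actual scoring functions, reward shaping, and weight schedule are defined in the full text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ValueNet(nn.Module):
    """Hypothetical value network: scores an encoded graph state."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.mlp(state).squeeze(-1)

def fine_tune_loss(log_probs, rewards, values, prior_log_probs, alpha):
    """Assumed combined objective (not the paper's exact formulation).

    log_probs:       log pi(a_t | s_t) of the sampled actions, shape (T,)
    rewards:         shaped rewards r_t from the task scoring function, (T,)
    values:          value-network estimates V(s_t), (T,)
    prior_log_probs: log-likelihood of the same actions under the frozen
                     pre-trained model (uniqueness-preserving term), (T,)
    alpha:           dynamic weight in [0, 1] balancing the two processes
    """
    advantages = rewards - values.detach()          # baseline-corrected return
    policy_term = -(advantages * log_probs).mean()  # REINFORCE objective
    prior_term = -prior_log_probs.mean()            # stay close to the prior
    value_loss = F.mse_loss(values, rewards)        # train the baseline
    return alpha * policy_term + (1.0 - alpha) * prior_term + value_loss

In a training loop, alpha would be updated per iteration, for example lowered when the uniqueness of a sampled batch drops; the paper's actual adjustment rule is given by its policy loss formulation.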
Pages: 12
Related papers
50 records in total
  • [1] Goal-directed molecule generation with fine-tuning by policy gradient
    Sha, Chunli
    Zhu, Fei
    Expert Systems with Applications, 2024, 246
  • [2] Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation
    You, Jiaxuan
    Liu, Bowen
    Ying, Rex
    Pande, Vijay
    Leskovec, Jure
    Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31
  • [3] Gradient Sparsification For Masked Fine-Tuning of Transformers
    O'Neill, James
    Dutta, Sourav
    2023 International Joint Conference on Neural Networks (IJCNN), 2023
  • [5] Goal-directed Sequence Generation with Simulation Feedback Method
    Liu, Xinyue
    Tian, Wenbo
    Liang, Wenxin
    Shen, Hua
    Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC 2019), 2019: 287-294
  • [6] Hybrid Semantics for Goal-Directed Natural Language Generation
    Baumler, Connor
    Ray, Soumya
    Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Volume 1: Long Papers, 2022: 1936-1946
  • [7] Trainable Projected Gradient Method for Robust Fine-tuning
    Tian, Junjiao
    Dai, Xiaoliang
    Ma, Chih-Yao
    He, Zecheng
    Liu, Yen-Cheng
    Kira, Zsolt
    2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 7836-7845
  • [8] Assistive Loading Promotes Goal-Directed Tuning of Stretch Reflex Gains
    Torell, Frida
    Franklin, Sae
    Franklin, David W.
    Dimitriou, Michael
    eNeuro, 2023, 10(2): 1-17
  • [9] Learning and generation of goal-directed arm reaching from scratch
    Kambara, Hiroyuki
    Kim, Kyoungsik
    Shin, Duk
    Sato, Makoto
    Koike, Yasuharu
    Neural Networks, 2009, 22(4): 348-361
  • [10] Goal-directed Generation of Discrete Structures with Conditional Generative Models
    Mollaysa, Amina
    Paige, Brooks
    Kalousis, Alexandros
    Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33