Neural multi-task learning in drug design

Times Cited: 2
Authors
Allenspach, Stephan [1]
Hiss, Jan A. [1]
Schneider, Gisbert [1]
Affiliations
[1] Swiss Fed Inst Technol, Dept Chem & Appl Biosci, Zurich, Switzerland
Funding
Swiss National Science Foundation;
Keywords
LIGAND BINDING-AFFINITY; MATRIX COMPLETION; NETWORKS; INFORMATION; PREDICTION; DISCOVERY; SYSTEM; MODEL;
DOI
10.1038/s42256-023-00785-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-task learning (MTL) is a machine learning paradigm that aims to enhance the generalization of predictive models by leveraging shared information across multiple tasks. The recent breakthroughs achieved by deep neural network models in various domains have sparked hope for similar advances in the chemical sciences. In this Perspective, we provide insights into the current state and future potential of neural MTL models applied to computer-assisted drug design. In the context of drug discovery, one prominent application of MTL is protein-ligand binding affinity prediction, in which the individual proteins are treated as tasks. Here we introduce the fundamental principles of MTL and propose a framework for categorizing MTL models on the basis of their architecture. This framework enables us to present a comprehensive overview and comparison of a selection of MTL models that have been successfully applied in drug design. Subsequently, we discuss the current challenges of applying MTL; a key one is defining suitable representations of both the molecular entities under investigation and the respective machine learning tasks.

Training a machine learning model on multiple tasks can produce more useful representations and achieve better performance than training a separate model for each task. In this Perspective, Allenspach et al. summarize and compare multi-task learning methods for computer-aided drug design.
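To make the protein-as-task setup concrete, the sketch below shows one common neural MTL design, hard parameter sharing: a shared trunk learns a joint molecular representation, and each protein target gets its own regression head for binding affinity. This is a minimal PyTorch illustration of the general idea only, not the architecture from the paper; the fingerprint size, layer widths, task names, and random training data are all placeholder assumptions.

import torch
import torch.nn as nn

class MultiTaskAffinityNet(nn.Module):
    """Hard parameter sharing: a shared trunk plus one affinity head per protein."""

    def __init__(self, n_features, task_names, hidden=256):
        super().__init__()
        # Shared layers learn a representation used by every task.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One scalar regression head per protein target (task).
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, 1) for name in task_names})

    def forward(self, x, task):
        return self.heads[task](self.trunk(x)).squeeze(-1)

# Hypothetical protein targets and 2048-bit fingerprint inputs (placeholders).
tasks = ["protein_A", "protein_B", "protein_C"]
model = MultiTaskAffinityNet(n_features=2048, task_names=tasks)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    loss = torch.zeros(())
    for task in tasks:
        x = torch.randn(32, 2048)   # placeholder ligand fingerprints
        y = torch.randn(32)         # placeholder binding affinities
        loss = loss + loss_fn(model(x, task), y)  # each task adds its own loss
    loss.backward()                 # gradients from all tasks update the shared trunk
    optimizer.step()

Because the trunk's gradients accumulate contributions from every task, data-rich protein targets can regularize the shared representation used by data-poor ones, which is the generalization benefit the abstract attributes to MTL.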
Pages: 124-137
Page count: 14
Related Papers
50 records in total
  • [1] Neural multi-task learning in drug design
    Allenspach, Stephan
    Hiss, Jan A.
    Schneider, Gisbert
    [J]. NATURE MACHINE INTELLIGENCE, 2024, 6 : 124 - 137
  • [2] Convex Multi-Task Learning with Neural Networks
    Ruiz, Carlos
    Alaiz, Carlos M.
    Dorronsoro, Jose R.
    [J]. HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, HAIS 2022, 2022, 13469 : 223 - 235
  • [3] A Pseudo-task Design in Multi-task Learning Deep Neural Network for Speaker Recognition
    Lu, Xugang
    Shen, Peng
    Tsao, Yu
    Kawai, Hisashi
    [J]. 2016 10TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2016
  • [4] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    [J]. MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [5] Multi-task Learning for Multilingual Neural Machine Translation
    Wang, Yiren
    Zhai, ChengXiang
    Awadalla, Hany Hassan
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1022 - 1034
  • [6] Episodic Multi-Task Learning with Heterogeneous Neural Processes
    Shen, Jiayi
    Zhen, Xiantong
    Wang, Qi
    Worring, Marcel
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [7] Neural Multi-Task Learning for Citation Function and Provenance
    Su, Xuan
    Prasad, Animesh
    Kan, Min-Yen
    Sugiyama, Kazunari
    [J]. 2019 ACM/IEEE JOINT CONFERENCE ON DIGITAL LIBRARIES (JCDL 2019), 2019, : 394 - 395
  • [8] Dynamic Multi-Task Learning with Convolutional Neural Network
    Fang, Yuchun
    Ma, Zhengyan
    Zhang, Zhaoxiang
    Zhang, Xu-Yao
    Bai, Xiang
    [J]. PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1668 - 1674
  • [9] Scheduled Multi-task Learning for Neural Chat Translation
    Liang, Yunlong
    Meng, Fandong
    Xu, Jinan
    Chen, Yufeng
    Zhou, Jie
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4375 - 4388