Understanding adaptive immune system as reinforcement learning

Cited by: 2
Authors
Kato, Takuya [1 ]
Kobayashi, Tetsuya J. [1 ,2 ,3 ,4 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat & Sci, Dept Math Informat, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138654, Japan
[2] Univ Tokyo, Inst Ind Sci, Meguro Ku, 4-6-1 Komaba, Tokyo 1538505, Japan
[3] Univ Tokyo, Grad Sch Engn, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138656, Japan
[4] Univ Tokyo, Universal Biol Inst, Bunkyo Ku, 7-3-1 Hongo, Tokyo 1138654, Japan
Source
PHYSICAL REVIEW RESEARCH | 2021, Vol. 3, Issue 01
Funding
Japan Science and Technology Agency; Japan Society for the Promotion of Science;
Keywords
SELECTION; IMMUNOLOGY; ACTIVATION; DRIVEN; MODEL; FAS;
DOI
10.1103/PhysRevResearch.3.013222
CLC number
O4 [Physics];
Discipline code
0702;
Abstract
The adaptive immune system of vertebrates can detect, respond to, and memorize diverse pathogens from past experience. While the clonal selection of T helper (Th) cells is a simple and established mechanism for better recognizing new pathogens, it remains unexplored how Th cells can acquire better ways to bias the responses of immune cells, translating recognized antigen information into regulatory signals that eliminate pathogens more efficiently. In this work, we address this problem by associating the adaptive immune network organized by Th cells with reinforcement learning (RL). By employing recent advancements in network-based RL, we show that the Th immune network can acquire the association between the antigen patterns of pathogens and the effective responses to them. Moreover, clonal selection, as well as other intercellular interactions, is derived as a learning rule of the network. We also demonstrate that the stationary clone-size distribution after learning shares characteristic features with those observed experimentally. Our theoretical framework may contribute to revising and renewing our understanding of adaptive immunity as a learning system.
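The correspondence described in the abstract can be illustrated with a minimal toy sketch: clone sizes act as policy weights that bias which effector response is chosen for a given antigen, and a multiplicative, reward-driven update plays the role of clonal selection. All names and dynamics below are illustrative assumptions for exposition, not the paper's actual model or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_antigens, n_responses = 4, 3

# Toy stand-in for the Th network: clone sizes act as policy weights
# mapping an antigen pattern to an effector response.
clone_size = np.ones((n_antigens, n_responses))

# Each antigen has one "effective" response that clears the pathogen.
effective = rng.integers(n_responses, size=n_antigens)

def policy(antigen):
    """Response probabilities proportional to clone sizes (clonal bias)."""
    w = clone_size[antigen]
    return w / w.sum()

eta = 0.5  # learning rate
for _ in range(3000):
    a = rng.integers(n_antigens)          # random pathogen encounter
    p = policy(a)
    r = rng.choice(n_responses, p=p)      # sampled immune response
    reward = 1.0 if r == effective[a] else 0.0
    # Multiplicative, reward-driven update: clones backing a successful
    # response proliferate, others shrink -- clonal selection read as
    # an RL-style learning rule.
    clone_size[a, r] *= np.exp(eta * (reward - p[r]))

learned = clone_size.argmax(axis=1)
print(np.array_equal(learned, effective))
```

Because successful clones only grow and unsuccessful ones only shrink, the dominant clone for each antigen converges to its effective response; in the toy setting above the learned mapping matches the target mapping.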
Pages: 19
Related Papers
50 records in total
  • [31] Reinforcement learning based adaptive metaheuristics
    Tessari, Michele
    Iacca, Giovanni
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 1854 - 1861
  • [32] Reinforcement Learning for Adaptive Mesh Refinement
    Yang, Jiachen
    Dzanic, Tarik
    Petersen, Brenden
    Kudo, Jun
    Mittal, Ketan
    Tomov, Vladimir
    Camier, Jean-Sylvain
    Zhao, Tuo
    Zha, Hongyuan
    Kolev, Tzanio
    Anderson, Robert
    Faissol, Daniel
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [33] An Adaptive Authentication Based on Reinforcement Learning
    Cui, Ziqi
    Zhao, Yongxiang
    Li, Chunxi
    Zuo, Qi
    Zhang, Haipeng
    2019 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TW), 2019,
  • [34] Adaptive Discretization in Online Reinforcement Learning
    Sinclair, Sean R.
    Banerjee, Siddhartha
    Yu, Christina Lee
    OPERATIONS RESEARCH, 2023, 71 (05) : 1636 - 1652
  • [35] Adaptive Interest for Emphatic Reinforcement Learning
    Klissarov, Martin
    Fakoor, Rasool
    Mueller, Jonas
    Asadi, Kavosh
    Kim, Taesup
    Smola, Alexander J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [36] Adaptive State Aggregation for Reinforcement Learning
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Jiang, Wei-Cheng
    PROCEEDINGS 2012 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2012, : 2452 - 2456
  • [37] Adaptive Exploration Strategies for Reinforcement Learning
    Hwang, Kao-Shing
    Li, Chih-Wen
    Jiang, Wei-Cheng
    2017 INTERNATIONAL CONFERENCE ON SYSTEM SCIENCE AND ENGINEERING (ICSSE), 2017, : 16 - 19
  • [38] ADAPTIVE GUIDANCE WITH REINFORCEMENT META LEARNING
    Gaudet, Brian
    Linares, Richard
    SPACEFLIGHT MECHANICS 2019, VOL 168, PTS I-IV, 2019, 168 : 4091 - 4109
  • [39] Adaptive operator selection with reinforcement learning
    Durgut, Rafet
    Aydin, Mehmet Emin
    Atli, Ibrahim
    INFORMATION SCIENCES, 2021, 581 : 773 - 790
  • [40] Reinforcement learning of adaptive control strategies
    Leslie K. Held
    Luc Vermeylen
    David Dignath
    Wim Notebaert
    Ruth M. Krebs
    Senne Braem
    Communications Psychology, 2 (1):