Hierarchical Reinforcement Learning Framework for Stochastic Spaceflight Campaign Design

Cited by: 9
Authors
Takubo, Yuji [1 ]
Chen, Hao [1 ]
Ho, Koki [1 ]
Affiliation
[1] Georgia Inst Technol, Aerosp Engn, Atlanta, GA 30332 USA
Keywords
LOGISTICS; SYSTEM;
DOI
10.2514/1.A35122
Chinese Library Classification: V [Aeronautics, Astronautics]
Discipline codes: 08; 0825
Abstract
This paper develops a hierarchical reinforcement learning architecture for multimission spaceflight campaign design under uncertainty, including vehicle design, infrastructure deployment planning, and space transportation scheduling. This problem involves a high-dimensional design space and is challenging especially with uncertainty present. To tackle this challenge, the developed framework has a hierarchical structure with reinforcement learning and network-based mixed-integer linear programming (MILP), where the former optimizes campaign-level decisions (e.g., design of the vehicle used throughout the campaign, destination demand assigned to each mission in the campaign), whereas the latter optimizes the detailed mission-level decisions (e.g., when to launch what from where to where). The framework is applied to a set of human lunar exploration campaign scenarios with uncertain in situ resource utilization performance as a case study. The main value of this work is its integration of the rapidly growing reinforcement learning research and the existing MILP-based space logistics methods through a hierarchical framework to handle the otherwise intractable complexity of space mission design under uncertainty. This unique framework is expected to be a critical steppingstone for the emerging research direction of artificial intelligence for space mission design.
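The abstract describes a two-layer architecture: an upper layer chooses campaign-level variables (e.g., vehicle design) under uncertainty, and a lower layer solves each fixed mission-level subproblem to optimality with MILP. The toy sketch below illustrates only this bilevel structure, not the paper's actual method: the RL layer is reduced to exhaustive evaluation over a tiny hypothetical action set, the MILP subproblem is replaced by a closed-form surrogate, and all names, numbers, and distributions (`VEHICLE_DESIGNS`, `mission_level_cost`, the ISRU uniform range) are illustrative assumptions.

```python
import random

# Hypothetical campaign-level action set: candidate vehicle sizes
# (stand-ins for the vehicle design chosen once for the whole campaign).
VEHICLE_DESIGNS = [20, 40, 60]

def mission_level_cost(design, isru_rate):
    """Stand-in for the network-based MILP subproblem: the minimized
    mission cost for a fixed vehicle design and one sampled in situ
    resource utilization (ISRU) performance. In the paper this layer is
    a mixed-integer linear program; here it is a toy closed-form
    surrogate so the sketch stays self-contained."""
    return (design - 40 * isru_rate) ** 2 + 100 / design

def expected_campaign_cost(design, n_samples=200, seed=0):
    """Campaign-level objective: expected mission cost under uncertain
    ISRU performance, estimated by Monte Carlo sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        isru_rate = rng.uniform(0.5, 1.5)  # uncertain ISRU performance
        total += mission_level_cost(design, isru_rate)
    return total / n_samples

# Upper layer (the RL agent's role, reduced here to enumeration):
# pick the design that minimizes expected cost over the uncertainty.
best = min(VEHICLE_DESIGNS, key=expected_campaign_cost)
print(best)
```

The design choice this illustrates is the separation of concerns: the upper layer only ever sees the optimal value returned by the lower-level solver, which is what makes the otherwise coupled campaign-and-mission design tractable.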
Pages: 421-433 (13 pages)