Hierarchical Reinforcement Learning Framework for Stochastic Spaceflight Campaign Design

Cited by: 9
Authors
Takubo, Yuji [1 ]
Chen, Hao [1 ]
Ho, Koki [1 ]
Affiliations
[1] Georgia Inst Technol, Aerosp Engn, Atlanta, GA 30332 USA
Keywords
LOGISTICS; SYSTEM
DOI
10.2514/1.A35122
CLC classification
V [Aviation, Aerospace]
Discipline codes
08; 0825
Abstract
This paper develops a hierarchical reinforcement learning architecture for multimission spaceflight campaign design under uncertainty, including vehicle design, infrastructure deployment planning, and space transportation scheduling. This problem involves a high-dimensional design space and is especially challenging when uncertainty is present. To tackle this challenge, the developed framework has a hierarchical structure combining reinforcement learning with network-based mixed-integer linear programming (MILP), where the former optimizes campaign-level decisions (e.g., design of the vehicle used throughout the campaign, destination demand assigned to each mission in the campaign), whereas the latter optimizes the detailed mission-level decisions (e.g., when to launch what from where to where). The framework is applied to a set of human lunar exploration campaign scenarios with uncertain in situ resource utilization performance as a case study. The main value of this work is its integration of the rapidly growing reinforcement learning research and the existing MILP-based space logistics methods through a hierarchical framework to handle the otherwise intractable complexity of space mission design under uncertainty. This unique framework is expected to be a critical steppingstone for the emerging research direction of artificial intelligence for space mission design.
Pages: 421-433
Page count: 13
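The two-level decomposition described in the abstract can be illustrated with a toy sketch. This is not the authors' code or problem data: the campaign-level learner here is a simple epsilon-greedy bandit choosing among a few hypothetical vehicle capacities, and `mission_level_cost` is a deliberately simplified stand-in for the paper's network-based MILP; all quantities are invented for illustration.

```python
import random

# Illustrative toy only (not the paper's method or data): the outer loop is a
# bandit-style RL agent making the campaign-level decision (a single vehicle
# propellant capacity), while mission_level_cost() stands in for the MILP
# subproblem that would schedule detailed mission-level logistics.

VEHICLE_DESIGNS = [20, 40, 60]   # hypothetical candidate capacities (t)
MISSION_DEMANDS = [8, 12, 10]    # hypothetical payload demand per mission (t)


def mission_level_cost(capacity, demand, isru_yield):
    """Stand-in for the MILP subproblem: propellant launched to meet one
    mission's demand, with in situ resource utilization (ISRU) offsetting
    part of each flight's capacity."""
    effective = max(capacity - isru_yield, 1)
    flights = -(-demand // effective)        # ceiling division
    return flights * capacity


def campaign_cost(capacity, isru_yield):
    # Total campaign cost for one sampled ISRU outcome.
    return sum(mission_level_cost(capacity, d, isru_yield)
               for d in MISSION_DEMANDS)


def train(episodes=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit over campaign-level designs under uncertain
    ISRU performance; returns the design with the lowest estimated cost."""
    rng = random.Random(seed)
    q = {c: 0.0 for c in VEHICLE_DESIGNS}    # running mean cost per design
    n = {c: 0 for c in VEHICLE_DESIGNS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            c = rng.choice(VEHICLE_DESIGNS)  # explore
        else:
            c = min(q, key=q.get)            # exploit (minimize cost)
        isru = rng.choice([0, 5, 10])        # uncertain ISRU yield per flight
        cost = campaign_cost(c, isru)
        n[c] += 1
        q[c] += (cost - q[c]) / n[c]         # incremental mean update
    return min(q, key=q.get)


best_design = train()
print("lowest-cost vehicle design:", best_design)
```

In the paper itself the campaign-level actions are far richer (vehicle sizing plus per-mission demand allocation) and the mission level is solved as an actual network-flow MILP; the sketch only shows the interface between the two layers.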
Related Papers
(50 records total)
  • [21] Framework for design optimization using deep reinforcement learning
    Yonekura, Kazuo
    Hattori, Hitoshi
    STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, 2019, 60 (04) : 1709 - 1713
  • [23] Stochastic Reinforcement Learning
    Kuang, Nikki Lijing
    Leung, Clement H. C.
    Sung, Vienne W. K.
    2018 IEEE FIRST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE), 2018, : 244 - 248
  • [24] A Twin Agent Reinforcement Learning Framework by Integrating Deterministic and Stochastic Policies
    Gupta, Nikita
    Anand, Shikhar
    Kumar, Deepak
    Ramteke, Manojkumar
    Kandath, Harikumar
    Kodamana, Hariprasad
    INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH, 2024, 63 (24) : 10692 - 10703
  • [25] Adaptive Gait Generation for Hexapod Robots Based on Reinforcement Learning and Hierarchical Framework
    Qiu, Zhiying
    Wei, Wu
    Liu, Xiongding
    ACTUATORS, 2023, 12 (02)
  • [26] Detect, Understand, Act: A Neuro-symbolic Hierarchical Reinforcement Learning Framework
    Mitchener, Ludovico
    Tuckey, David
    Crosby, Matthew
    Russo, Alessandra
    MACHINE LEARNING, 2022, 111 (04) : 1523 - 1549
  • [27] Improving Energy Efficiency in Green Femtocell Networks: A Hierarchical Reinforcement Learning Framework
    Chen, Xianfu
    Zhang, Honggang
    Chen, Tao
    Lasanen, Mika
    2013 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2013,
  • [29] A Hierarchical Framework for Multi-Lane Autonomous Driving Based on Reinforcement Learning
    Zhang, Xiaohui
    Sun, Jie
    Wang, Yunpeng
    Sun, Jian
    IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 4 : 626 - 638
  • [30] Hierarchical Reinforcement Learning Framework in Geographic Coordination for Air Combat Tactical Pursuit
    Chen, Ruihai
    Li, Hao
    Yan, Guanwei
    Peng, Haojie
    Zhang, Qian
    ENTROPY, 2023, 25 (10)