SPACECRAFT DECISION-MAKING AUTONOMY USING DEEP REINFORCEMENT LEARNING

Cited by: 0
Authors
Harris, Andrew [1 ]
Teil, Thibaud [1 ]
Schaub, Hanspeter [2 ]
Affiliations
[1] Univ Colorado, Ann & HJ Smead Dept Aerosp Engn Sci, Boulder, CO 80309 USA
[2] Univ Colorado, Smead Dept Aerosp Engn Sci, Engn, Colorado Ctr Astrodynam Res, 431 UCB, Boulder, CO 80309 USA
DOI: not available
Chinese Library Classification: V [Aeronautics, Astronautics]
Discipline classification codes: 08; 0825
Abstract
The high cost of space mission operations has motivated several space agencies to prioritize the development of autonomous spacecraft control techniques. "Learning" agents present one manner in which autonomous spacecraft can adapt to changing hardware capabilities, environmental parameters, or mission objectives while minimizing dependence on ground intervention. This work considers the frameworks and tools of deep reinforcement learning to address high-level mission planning and decision-making problems for autonomous spacecraft, under the assumption that sub-problems have been addressed through design. Two representative problems, reflecting the challenges of autonomous orbit insertion and science operations planning respectively, are formulated as Partially Observable Markov Decision Processes (POMDPs) and addressed with deep reinforcement learners to demonstrate the benefits, pitfalls, and considerations inherent to this approach. Sensitivity to initial conditions and learning strategy is discussed and analyzed. Results from selected problems demonstrate the use of reinforcement learning to improve or fine-tune prior policies within a mode-oriented paradigm while maintaining robustness to uncertain environmental parameters.
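The mode-oriented framing described in the abstract can be illustrated with a deliberately simplified sketch. The environment, states, modes, and reward values below are hypothetical toy choices, not the paper's actual problem setups: a spacecraft picks among discrete operating modes (coast, burn, science) while only seeing a noisy observation of its true state, and a tabular Q-learner over observations stands in for the deep learners used in the work.

```python
import random

# Toy POMDP illustration (not the paper's environments).
# True states: 0 = far from target orbit, 1 = near insertion window, 2 = inserted.
# Actions (modes): 0 = coast, 1 = burn, 2 = science.

def step(state, action, rng):
    """Return (next_state, reward). Burning near the window inserts;
    science mode only pays off once inserted."""
    if state == 0:
        return (1, 0.0) if action == 0 else (0, -1.0)   # coast toward window
    if state == 1:
        return (2, 5.0) if action == 1 else (1, -0.5)   # burn at the window
    return (2, 1.0) if action == 2 else (2, -0.5)       # inserted: do science

def observe(state, rng, noise=0.1):
    """Partial observability: occasionally report a wrong state."""
    if rng.random() < noise:
        return rng.choice([s for s in (0, 1, 2) if s != state])
    return state

def train(episodes=2000, horizon=20, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning, treating observations as states."""
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(3)]          # Q[observation][action]
    for _ in range(episodes):
        state = 0
        for _ in range(horizon):
            obs = observe(state, rng)
            if rng.random() < eps:
                action = rng.randrange(3)
            else:
                action = max(range(3), key=lambda a: q[obs][a])
            nxt, reward = step(state, action, rng)
            nobs = observe(nxt, rng)
            q[obs][action] += alpha * (reward + gamma * max(q[nobs]) - q[obs][action])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    policy = [max(range(3), key=lambda a: q[o][a]) for o in range(3)]
    print(policy)  # learned mode per observation
```

Treating the noisy observation as if it were the true state is the simplest (memoryless) POMDP approximation; the paper's deep-learning approach replaces the table with a function approximator and handles far richer state and observation spaces.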
Pages: 1757–1775
Page count: 19