SPACECRAFT DECISION-MAKING AUTONOMY USING DEEP REINFORCEMENT LEARNING

Cited by: 0
Authors
Harris, Andrew [1 ]
Teil, Thibaud [1 ]
Schaub, Hanspeter [2 ]
Affiliations
[1] Univ Colorado, Ann & HJ Smead Dept Aerosp Engn Sci, Boulder, CO 80309 USA
[2] Univ Colorado, Smead Dept Aerosp Engn Sci, Colorado Ctr Astrodynam Res, 431 UCB, Boulder, CO 80309 USA
Keywords
(none listed)
DOI
(not available)
Chinese Library Classification (CLC)
V [Aeronautics, Astronautics]
Discipline Codes
08; 0825
Abstract
The high cost of space mission operations has motivated several space agencies to prioritize the development of autonomous spacecraft control techniques. "Learning" agents present one manner in which autonomous spacecraft can adapt to changing hardware capabilities, environmental parameters, or mission objectives while minimizing dependence on ground intervention. This work considers the frameworks and tools of deep reinforcement learning to address high-level mission planning and decision-making problems for autonomous spacecraft, under the assumption that sub-problems have been addressed through design. Two representative problems, reflecting the challenges of autonomous orbit insertion and science operations planning respectively, are presented as Partially Observable Markov Decision Processes (POMDPs) and addressed with deep reinforcement learners to demonstrate the benefits, pitfalls, and considerations inherent to this approach. Sensitivity to initial conditions and learning strategy is discussed and analyzed. Results from the selected problems demonstrate the use of reinforcement learning to improve or fine-tune prior policies within a mode-oriented paradigm while maintaining robustness to uncertain environmental parameters.
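The abstract describes framing high-level spacecraft decision-making, such as science operations planning, as a POMDP and training a policy with reinforcement learning within a mode-oriented paradigm. As a rough illustration only, and not the authors' environment or implementation, the sketch below sets up a toy mode-selection task with a hidden power/data state, a coarsely quantized partial observation, and tabular Q-learning standing in for the deep learners used in the paper; every mode, transition rule, threshold, and reward weight here is an invented assumption.

# Toy stand-in (assumed, not from the paper): a POMDP-style science-operations
# mode-selection task solved with tabular Q-learning instead of deep RL.
import numpy as np

rng = np.random.default_rng(0)

N_MODES = 3          # 0 = charge, 1 = collect science, 2 = downlink
EPISODE_LEN = 100    # decision steps per simulated orbit segment

def step(state, action):
    """Advance the hidden state (battery level, data buffer, in-sunlight flag)."""
    battery, data, sunlit = state
    if action == 0:                       # charge: gain power only in sunlight
        battery = min(1.0, battery + (0.10 if sunlit else 0.0))
        reward = 0.0
    elif action == 1:                     # collect science: costs power, fills buffer
        battery -= 0.05
        data = min(1.0, data + 0.10)
        reward = 0.1
    else:                                 # downlink: costs power, empties buffer
        battery -= 0.08
        reward = 1.0 * data               # reward proportional to data returned
        data = 0.0
    if battery <= 0.0:                    # power failure terminates the episode
        return None, -10.0
    sunlit = rng.random() < 0.6           # stochastic eclipse/sunlight transitions
    return (battery, data, sunlit), reward

def observe(state):
    """Partial observation: coarsely quantized battery and data levels."""
    battery, data, sunlit = state
    return (int(battery * 4), int(data * 4), int(sunlit))

Q = np.zeros((5, 5, 2, N_MODES))          # tabular Q over quantized observations
alpha, gamma, eps = 0.1, 0.95, 0.1        # learning rate, discount, exploration

for episode in range(2000):
    state = (1.0, 0.0, True)              # start fully charged, empty buffer, sunlit
    for _ in range(EPISODE_LEN):
        obs = observe(state)
        a = rng.integers(N_MODES) if rng.random() < eps else int(np.argmax(Q[obs]))
        nxt, r = step(state, a)
        if nxt is None:                   # terminal step: no bootstrap term
            Q[obs][a] += alpha * (r - Q[obs][a])
            break
        target = r + gamma * np.max(Q[observe(nxt)])
        Q[obs][a] += alpha * (target - Q[obs][a])
        state = nxt

print("Greedy mode when battery full, buffer full, sunlit:",
      int(np.argmax(Q[4, 4, 1])))

The paper's agents replace the Q-table with a neural network over richer observations, but the structure is the same: a hidden environment state, a partial observation, a discrete set of operational modes as actions, and a reward shaped around mission return and resource safety.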
Pages: 1757-1775
Page count: 19
Related Papers
50 in total (items [31]-[40] shown below)
  • [31] Xiao, Wenxuan; Yang, Yuyou; Mu, Xinyu; Xie, Yi; Tang, Xiaolin; Cao, Dongpu; Liu, Teng. Decision-Making for Autonomous Vehicles in Random Task Scenarios at Unsignalized Intersection Using Deep Reinforcement Learning. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73(06): 7812-7825.
  • [32] Guo, Xingche; Zeng, Donglin; Wang, Yuanjia. HMM for discovering decision-making dynamics using reinforcement learning experiments. BIOSTATISTICS, 2024.
  • [33] Bonjour, Trevor; Haliem, Marina; Alsalem, Aala; Thomas, Shilpa; Li, Hongyu; Aggarwal, Vaneet; Kejriwal, Mayank; Bhargava, Bharat. Decision Making in Monopoly Using a Hybrid Deep Reinforcement Learning Approach. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2022, 6(06): 1335-1344.
  • [34] Zhao, Xujiang; Hu, Shu; Cho, Jin-Hee; Chen, Feng. Uncertainty-based Decision Making Using Deep Reinforcement Learning. 2019 22ND INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION 2019), 2019.
  • [35] Yu, Weizhuo; Liu, Chuang; Yue, Xiaokui. Reinforcement learning-based decision-making for spacecraft pursuit-evasion game in elliptical orbits. CONTROL ENGINEERING PRACTICE, 2024, 153.
  • [36] Ghraizi, Dany; Talj, Reine; Francis, Clovis. A Deep Reinforcement Learning Decision-Making Approach for Adaptive Cruise Control in Autonomous Vehicles. 2023 21ST INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2023: 71-78.
  • [37] Li, Honghao; Zhang, Pei; Liu, Zhao. Decision-making Method for Transient Stability Emergency Control Based on Deep Reinforcement Learning. Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2023, 47(05): 144-152.
  • [38] Li, Ke; Zhang, Kun; Zhang, Zhenchong; Liu, Zekun; Hua, Shuai; He, Jianliang. A UAV Maneuver Decision-Making Algorithm for Autonomous Airdrop Based on Deep Reinforcement Learning. SENSORS, 2021, 21(06).
  • [39] Huang, Wanwei; Yuan, Bo; Wang, Sunan; Ding, Yi; Li, Yuhua. Network Defense Decision-Making Based on Deep Reinforcement Learning and Dynamic Game Theory. CHINA COMMUNICATIONS, 2023, 21(09): 262-275.
  • [40] Wu, Yanbin; Yin, Zhishuai; Yu, Jia; Zhang, Ming. Lane Change Decision-Making through Deep Reinforcement Learning with Driver's Inputs. 2022 IEEE 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION ENGINEERING (ICITE), 2022: 314-319.